00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 496 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3161 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.037 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.037 The recommended git tool is: git 00:00:00.038 using credential 00000000-0000-0000-0000-000000000002 00:00:00.039 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.054 Fetching changes from the remote Git repository 00:00:00.055 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.083 Using shallow fetch with depth 1 00:00:00.083 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.083 > git --version # timeout=10 00:00:00.117 > git --version # 'git version 2.39.2' 00:00:00.117 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.157 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.157 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.663 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.674 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.686 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:03.686 > git config core.sparsecheckout # timeout=10 00:00:03.698 > git read-tree -mu HEAD # timeout=10 00:00:03.716 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:03.733 Commit message: "pool: fixes for VisualBuild class" 00:00:03.733 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:03.823 [Pipeline] Start of Pipeline 00:00:03.836 [Pipeline] library 00:00:03.838 Loading library shm_lib@master 00:00:03.838 Library shm_lib@master is cached. Copying from home. 00:00:03.852 [Pipeline] node 00:00:03.864 Running on VM-host-SM16 in /var/jenkins/workspace/ubuntu20-vg-autotest 00:00:03.866 [Pipeline] { 00:00:03.877 [Pipeline] catchError 00:00:03.879 [Pipeline] { 00:00:03.888 [Pipeline] wrap 00:00:03.895 [Pipeline] { 00:00:03.902 [Pipeline] stage 00:00:03.903 [Pipeline] { (Prologue) 00:00:03.918 [Pipeline] echo 00:00:03.919 Node: VM-host-SM16 00:00:03.923 [Pipeline] cleanWs 00:00:03.930 [WS-CLEANUP] Deleting project workspace... 00:00:03.930 [WS-CLEANUP] Deferred wipeout is used... 
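Editor's note: the prologue above pins the jbp build-pool repo to one exact revision via a shallow fetch. A minimal standalone sketch of that checkout pattern, assembled from the commands traced in the log (REV is the FETCH_HEAD commit reported above; nothing else is added):

    REPO_URL=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    REV=9bbc799d7020f50509d938dbe97dc05da0c1b5c3
    git init jbp && cd jbp
    # --depth=1 fetches only the branch tip, which is all a throwaway
    # CI checkout needs; the pipeline then detaches onto the exact SHA.
    git fetch --tags --force --depth=1 "$REPO_URL" refs/heads/master
    git checkout -f "$REV"   # same commit as FETCH_HEAD in the log
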
00:00:03.935 [WS-CLEANUP] done 00:00:04.118 [Pipeline] setCustomBuildProperty 00:00:04.173 [Pipeline] nodesByLabel 00:00:04.174 Found a total of 2 nodes with the 'sorcerer' label 00:00:04.181 [Pipeline] httpRequest 00:00:04.184 HttpMethod: GET 00:00:04.184 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:04.186 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:04.188 Response Code: HTTP/1.1 200 OK 00:00:04.188 Success: Status code 200 is in the accepted range: 200,404 00:00:04.189 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:04.926 [Pipeline] sh 00:00:05.205 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:05.218 [Pipeline] httpRequest 00:00:05.222 HttpMethod: GET 00:00:05.222 URL: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:05.223 Sending request to url: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:05.235 Response Code: HTTP/1.1 200 OK 00:00:05.235 Success: Status code 200 is in the accepted range: 200,404 00:00:05.236 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:33.545 [Pipeline] sh 00:00:33.828 + tar --no-same-owner -xf spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:37.121 [Pipeline] sh 00:00:37.422 + git -C spdk log --oneline -n5 00:00:37.422 130b9406a test/nvmf: replace rpc_cmd() with direct invocation of rpc.py due to inherently larger timeout 00:00:37.422 5d3fd6726 bdev: Fix a race bug between unregistration and QoS poller 00:00:37.422 fbc673ece test/scheduler: Meassure utime of $spdk_pid threads as a fallback 00:00:37.422 3651466d0 test/scheduler: Calculate median of the cpu load samples 00:00:37.422 a7414547f test/scheduler: Make sure stderr is not O_TRUNCated in move_proc() 00:00:37.442 [Pipeline] withCredentials 00:00:37.453 > git --version # timeout=10 00:00:37.466 > git --version # 'git version 2.39.2' 00:00:37.481 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:37.484 [Pipeline] { 00:00:37.494 [Pipeline] retry 00:00:37.496 [Pipeline] { 00:00:37.514 [Pipeline] sh 00:00:37.793 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:38.061 [Pipeline] } 00:00:38.084 [Pipeline] // retry 00:00:38.090 [Pipeline] } 00:00:38.109 [Pipeline] // withCredentials 00:00:38.120 [Pipeline] httpRequest 00:00:38.124 HttpMethod: GET 00:00:38.124 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:38.125 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:38.140 Response Code: HTTP/1.1 200 OK 00:00:38.141 Success: Status code 200 is in the accepted range: 200,404 00:00:38.141 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:49.770 [Pipeline] sh 00:00:50.048 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:51.961 [Pipeline] sh 00:00:52.241 + git -C dpdk log --oneline -n5 00:00:52.241 eeb0605f11 version: 23.11.0 00:00:52.241 238778122a doc: update release notes for 23.11 00:00:52.241 46aa6b3cfc doc: fix description of RSS features 00:00:52.241 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:00:52.241 7e421ae345 devtools: support skipping 
forbid rule check 00:00:52.259 [Pipeline] writeFile 00:00:52.276 [Pipeline] sh 00:00:52.556 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:52.568 [Pipeline] sh 00:00:52.847 + cat autorun-spdk.conf 00:00:52.848 SPDK_TEST_UNITTEST=1 00:00:52.848 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.848 SPDK_TEST_NVME=1 00:00:52.848 SPDK_TEST_BLOCKDEV=1 00:00:52.848 SPDK_RUN_ASAN=1 00:00:52.848 SPDK_RUN_UBSAN=1 00:00:52.848 SPDK_TEST_RAID5=1 00:00:52.848 SPDK_TEST_NATIVE_DPDK=v23.11 00:00:52.848 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:00:52.848 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:52.855 RUN_NIGHTLY=1 00:00:52.857 [Pipeline] } 00:00:52.873 [Pipeline] // stage 00:00:52.889 [Pipeline] stage 00:00:52.891 [Pipeline] { (Run VM) 00:00:52.906 [Pipeline] sh 00:00:53.186 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:53.186 + echo 'Start stage prepare_nvme.sh' 00:00:53.186 Start stage prepare_nvme.sh 00:00:53.186 + [[ -n 6 ]] 00:00:53.186 + disk_prefix=ex6 00:00:53.186 + [[ -n /var/jenkins/workspace/ubuntu20-vg-autotest ]] 00:00:53.186 + [[ -e /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf ]] 00:00:53.186 + source /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf 00:00:53.186 ++ SPDK_TEST_UNITTEST=1 00:00:53.186 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:53.186 ++ SPDK_TEST_NVME=1 00:00:53.186 ++ SPDK_TEST_BLOCKDEV=1 00:00:53.186 ++ SPDK_RUN_ASAN=1 00:00:53.186 ++ SPDK_RUN_UBSAN=1 00:00:53.186 ++ SPDK_TEST_RAID5=1 00:00:53.186 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:00:53.186 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:00:53.186 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:53.186 ++ RUN_NIGHTLY=1 00:00:53.186 + cd /var/jenkins/workspace/ubuntu20-vg-autotest 00:00:53.186 + nvme_files=() 00:00:53.186 + declare -A nvme_files 00:00:53.186 + backend_dir=/var/lib/libvirt/images/backends 00:00:53.186 + nvme_files['nvme.img']=5G 00:00:53.186 + nvme_files['nvme-cmb.img']=5G 00:00:53.186 + nvme_files['nvme-multi0.img']=4G 00:00:53.186 + nvme_files['nvme-multi1.img']=4G 00:00:53.186 + nvme_files['nvme-multi2.img']=4G 00:00:53.186 + nvme_files['nvme-openstack.img']=8G 00:00:53.186 + nvme_files['nvme-zns.img']=5G 00:00:53.186 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:53.186 + (( SPDK_TEST_FTL == 1 )) 00:00:53.186 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:53.186 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:53.186 + for nvme in "${!nvme_files[@]}" 00:00:53.186 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:00:53.186 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:53.186 + for nvme in "${!nvme_files[@]}" 00:00:53.186 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:00:53.186 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:53.186 + for nvme in "${!nvme_files[@]}" 00:00:53.186 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:00:53.186 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:53.186 + for nvme in "${!nvme_files[@]}" 00:00:53.186 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:00:53.186 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:53.186 + for nvme in "${!nvme_files[@]}" 00:00:53.186 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:00:53.186 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:53.186 + for nvme in "${!nvme_files[@]}" 00:00:53.186 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:00:53.186 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:53.186 + for nvme in "${!nvme_files[@]}" 00:00:53.186 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:00:53.753 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:53.753 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:00:53.753 + echo 'End stage prepare_nvme.sh' 00:00:53.753 End stage prepare_nvme.sh 00:00:53.766 [Pipeline] sh 00:00:54.046 + DISTRO=ubuntu2004 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:54.046 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -H -a -v -f ubuntu2004 00:00:54.046 00:00:54.046 DIR=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk/scripts/vagrant 00:00:54.046 SPDK_DIR=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk 00:00:54.046 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu20-vg-autotest 00:00:54.046 HELP=0 00:00:54.046 DRY_RUN=0 00:00:54.046 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img, 00:00:54.046 NVME_DISKS_TYPE=nvme, 00:00:54.046 NVME_AUTO_CREATE=0 00:00:54.046 NVME_DISKS_NAMESPACES=, 00:00:54.046 NVME_CMB=, 00:00:54.046 NVME_PMR=, 00:00:54.046 NVME_ZNS=, 00:00:54.046 NVME_MS=, 00:00:54.046 NVME_FDP=, 00:00:54.046 SPDK_VAGRANT_DISTRO=ubuntu2004 00:00:54.046 SPDK_VAGRANT_VMCPU=10 00:00:54.046 SPDK_VAGRANT_VMRAM=12288 00:00:54.046 SPDK_VAGRANT_PROVIDER=libvirt 00:00:54.046 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:54.046 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:54.046 SPDK_OPENSTACK_NETWORK=0 
00:00:54.046 VAGRANT_PACKAGE_BOX=0 00:00:54.046 VAGRANTFILE=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:54.046 FORCE_DISTRO=true 00:00:54.046 VAGRANT_BOX_VERSION= 00:00:54.046 EXTRA_VAGRANTFILES= 00:00:54.046 NIC_MODEL=e1000 00:00:54.046 00:00:54.046 mkdir: created directory '/var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt' 00:00:54.046 /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt /var/jenkins/workspace/ubuntu20-vg-autotest 00:00:57.330 Bringing machine 'default' up with 'libvirt' provider... 00:00:57.897 ==> default: Creating image (snapshot of base box volume). 00:00:58.155 ==> default: Creating domain with the following settings... 00:00:58.155 ==> default: -- Name: ubuntu2004-20.04-1712646987-2220_default_1717793840_5ab1f02ff2f15eb5a2a1 00:00:58.155 ==> default: -- Domain type: kvm 00:00:58.155 ==> default: -- Cpus: 10 00:00:58.155 ==> default: -- Feature: acpi 00:00:58.155 ==> default: -- Feature: apic 00:00:58.155 ==> default: -- Feature: pae 00:00:58.155 ==> default: -- Memory: 12288M 00:00:58.155 ==> default: -- Memory Backing: hugepages: 00:00:58.155 ==> default: -- Management MAC: 00:00:58.155 ==> default: -- Loader: 00:00:58.155 ==> default: -- Nvram: 00:00:58.155 ==> default: -- Base box: spdk/ubuntu2004 00:00:58.155 ==> default: -- Storage pool: default 00:00:58.155 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2004-20.04-1712646987-2220_default_1717793840_5ab1f02ff2f15eb5a2a1.img (20G) 00:00:58.155 ==> default: -- Volume Cache: default 00:00:58.155 ==> default: -- Kernel: 00:00:58.155 ==> default: -- Initrd: 00:00:58.155 ==> default: -- Graphics Type: vnc 00:00:58.155 ==> default: -- Graphics Port: -1 00:00:58.155 ==> default: -- Graphics IP: 127.0.0.1 00:00:58.155 ==> default: -- Graphics Password: Not defined 00:00:58.155 ==> default: -- Video Type: cirrus 00:00:58.155 ==> default: -- Video VRAM: 9216 00:00:58.155 ==> default: -- Sound Type: 00:00:58.155 ==> default: -- Keymap: en-us 00:00:58.155 ==> default: -- TPM Path: 00:00:58.155 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:58.155 ==> default: -- Command line args: 00:00:58.155 ==> default: -> value=-device, 00:00:58.155 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:00:58.155 ==> default: -> value=-drive, 00:00:58.155 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:00:58.155 ==> default: -> value=-device, 00:00:58.155 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:58.155 ==> default: Creating shared folders metadata... 00:00:58.155 ==> default: Starting domain. 00:01:00.058 ==> default: Waiting for domain to get an IP address... 00:01:10.068 ==> default: Waiting for SSH to become available... 00:01:11.969 ==> default: Configuring and enabling network interfaces... 00:01:13.889 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:19.169 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:22.482 ==> default: Mounting SSHFS shared folder... 00:01:22.741 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output => /home/vagrant/spdk_repo/output 00:01:22.741 ==> default: Checking Mount.. 00:01:25.273 ==> default: Checking Mount.. 
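Editor's note: the "Command line args" block above shows how the test VM attaches the raw backing file as an emulated NVMe controller plus namespace. Those libvirt passthrough args correspond to a plain QEMU invocation roughly like the sketch below; the -machine/-smp/-m flags are illustrative stand-ins for what libvirt generates, while the device/drive options are copied from the log:

    qemu-system-x86_64 -machine q35,accel=kvm -smp 10 -m 12288 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme,id=nvme-0,serial=12340 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

The controller (-device nvme) is defined first so the namespace (-device nvme-ns) can reference it via bus=nvme-0; the backing drive uses if=none because the nvme-ns device, not the machine, consumes it.
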
00:01:25.531 ==> default: Folder Successfully Mounted! 00:01:25.531 ==> default: Running provisioner: file... 00:01:25.789 default: ~/.gitconfig => .gitconfig 00:01:25.789 00:01:25.789 SUCCESS! 00:01:25.789 00:01:25.789 cd to /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt and type "vagrant ssh" to use. 00:01:25.789 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:25.789 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt" to destroy all trace of vm. 00:01:25.789 00:01:25.798 [Pipeline] } 00:01:25.816 [Pipeline] // stage 00:01:25.825 [Pipeline] dir 00:01:25.826 Running in /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt 00:01:25.828 [Pipeline] { 00:01:25.841 [Pipeline] catchError 00:01:25.843 [Pipeline] { 00:01:25.857 [Pipeline] sh 00:01:26.140 + vagrant ssh-config --host vagrant 00:01:26.140 + sed -ne /^Host/,$p 00:01:26.140 + tee ssh_conf 00:01:30.385 Host vagrant 00:01:30.385 HostName 192.168.121.113 00:01:30.385 User vagrant 00:01:30.385 Port 22 00:01:30.385 UserKnownHostsFile /dev/null 00:01:30.385 StrictHostKeyChecking no 00:01:30.385 PasswordAuthentication no 00:01:30.385 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2004/20.04-1712646987-2220/libvirt/ubuntu2004 00:01:30.385 IdentitiesOnly yes 00:01:30.385 LogLevel FATAL 00:01:30.385 ForwardAgent yes 00:01:30.385 ForwardX11 yes 00:01:30.385 00:01:30.400 [Pipeline] withEnv 00:01:30.403 [Pipeline] { 00:01:30.427 [Pipeline] sh 00:01:30.707 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:30.707 source /etc/os-release 00:01:30.707 [[ -e /image.version ]] && img=$(< /image.version) 00:01:30.707 # Minimal, systemd-like check. 00:01:30.707 if [[ -e /.dockerenv ]]; then 00:01:30.707 # Clear garbage from the node's name: 00:01:30.707 # agt-er_autotest_547-896 -> autotest_547-896 00:01:30.707 # $HOSTNAME is the actual container id 00:01:30.707 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:30.707 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:30.707 # We can assume this is a mount from a host where container is running, 00:01:30.707 # so fetch its hostname to easily identify the target swarm worker. 
00:01:30.707 container="$(< /etc/hostname) ($agent)" 00:01:30.707 else 00:01:30.707 # Fallback 00:01:30.707 container=$agent 00:01:30.707 fi 00:01:30.707 fi 00:01:30.707 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:30.707 00:01:31.654 [Pipeline] } 00:01:31.682 [Pipeline] // withEnv 00:01:31.694 [Pipeline] setCustomBuildProperty 00:01:31.717 [Pipeline] stage 00:01:31.720 [Pipeline] { (Tests) 00:01:31.745 [Pipeline] sh 00:01:32.027 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:32.608 [Pipeline] sh 00:01:32.888 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:33.469 [Pipeline] timeout 00:01:33.470 Timeout set to expire in 1 hr 30 min 00:01:33.472 [Pipeline] { 00:01:33.487 [Pipeline] sh 00:01:33.767 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:34.704 HEAD is now at 130b9406a test/nvmf: replace rpc_cmd() with direct invocation of rpc.py due to inherently larger timeout 00:01:34.718 [Pipeline] sh 00:01:34.999 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:35.565 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:35.583 [Pipeline] sh 00:01:35.864 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:36.445 [Pipeline] sh 00:01:36.746 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu20-vg-autotest ./autoruner.sh spdk_repo 00:01:37.313 ++ readlink -f spdk_repo 00:01:37.313 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:37.313 + [[ -n /home/vagrant/spdk_repo ]] 00:01:37.313 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:37.313 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:37.313 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:37.313 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:37.313 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:37.313 + [[ ubuntu20-vg-autotest == pkgdep-* ]] 00:01:37.313 + cd /home/vagrant/spdk_repo 00:01:37.313 + source /etc/os-release 00:01:37.313 ++ NAME=Ubuntu 00:01:37.313 ++ VERSION='20.04.6 LTS (Focal Fossa)' 00:01:37.313 ++ ID=ubuntu 00:01:37.313 ++ ID_LIKE=debian 00:01:37.313 ++ PRETTY_NAME='Ubuntu 20.04.6 LTS' 00:01:37.313 ++ VERSION_ID=20.04 00:01:37.313 ++ HOME_URL=https://www.ubuntu.com/ 00:01:37.313 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:01:37.313 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:01:37.313 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:01:37.313 ++ VERSION_CODENAME=focal 00:01:37.313 ++ UBUNTU_CODENAME=focal 00:01:37.313 + uname -a 00:01:37.313 Linux ubuntu2004-cloud-1712646987-2220 5.4.0-176-generic #196-Ubuntu SMP Fri Mar 22 16:46:39 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:01:37.313 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:37.313 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:37.571 Hugepages 00:01:37.571 node hugesize free / total 00:01:37.571 node0 1048576kB 0 / 0 00:01:37.571 node0 2048kB 0 / 0 00:01:37.571 00:01:37.571 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:37.571 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:37.571 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:37.571 + rm -f /tmp/spdk-ld-path 00:01:37.571 + source autorun-spdk.conf 00:01:37.571 ++ SPDK_TEST_UNITTEST=1 00:01:37.571 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.571 ++ SPDK_TEST_NVME=1 00:01:37.571 ++ SPDK_TEST_BLOCKDEV=1 00:01:37.571 ++ SPDK_RUN_ASAN=1 00:01:37.571 ++ SPDK_RUN_UBSAN=1 00:01:37.571 ++ SPDK_TEST_RAID5=1 00:01:37.571 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:37.571 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:37.571 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:37.571 ++ RUN_NIGHTLY=1 00:01:37.571 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:37.571 + [[ -n '' ]] 00:01:37.571 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:37.571 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:37.571 + for M in /var/spdk/build-*-manifest.txt 00:01:37.571 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:37.571 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:37.571 + for M in /var/spdk/build-*-manifest.txt 00:01:37.571 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:37.571 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:37.571 ++ uname 00:01:37.571 + [[ Linux == \L\i\n\u\x ]] 00:01:37.571 + sudo dmesg -T 00:01:37.571 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:37.571 + sudo dmesg --clear 00:01:37.571 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:37.571 + dmesg_pid=2597 00:01:37.571 + sudo dmesg -Tw 00:01:37.571 + [[ Ubuntu == FreeBSD ]] 00:01:37.571 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:37.571 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:37.571 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:37.571 + [[ -x /usr/src/fio-static/fio ]] 00:01:37.571 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:37.571 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:37.571 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:37.571 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:01:37.572 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:37.572 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:37.572 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:37.572 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:37.572 Test configuration: 00:01:37.572 SPDK_TEST_UNITTEST=1 00:01:37.572 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.572 SPDK_TEST_NVME=1 00:01:37.572 SPDK_TEST_BLOCKDEV=1 00:01:37.572 SPDK_RUN_ASAN=1 00:01:37.572 SPDK_RUN_UBSAN=1 00:01:37.572 SPDK_TEST_RAID5=1 00:01:37.572 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:37.572 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:37.572 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:37.831 RUN_NIGHTLY=1 20:57:59 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:37.831 20:57:59 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:37.831 20:57:59 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:37.831 20:57:59 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:37.831 20:57:59 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:37.831 20:57:59 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:37.831 20:57:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:37.831 20:57:59 -- paths/export.sh@5 -- $ export PATH 00:01:37.831 20:57:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:37.831 20:57:59 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:37.831 20:57:59 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:37.831 20:57:59 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1717793879.XXXXXX 00:01:37.831 20:57:59 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1717793879.NPFzRQ 00:01:37.831 20:57:59 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:37.831 20:57:59 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:01:37.831 20:57:59 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:37.831 20:57:59 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:01:37.831 
20:57:59 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:37.831 20:57:59 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:37.831 20:57:59 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:37.831 20:57:59 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:37.831 20:57:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:37.831 20:57:59 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:01:37.831 20:57:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:37.831 20:57:59 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:37.831 20:57:59 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:37.831 20:57:59 -- spdk/autobuild.sh@16 -- $ date -u 00:01:37.831 Fri Jun 7 20:57:59 UTC 2024 00:01:37.831 20:57:59 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:37.831 LTS-43-g130b9406a 00:01:37.831 20:57:59 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:37.831 20:57:59 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:37.831 20:57:59 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:37.831 20:57:59 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:37.831 20:57:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:37.831 ************************************ 00:01:37.831 START TEST asan 00:01:37.831 ************************************ 00:01:37.831 using asan 00:01:37.831 20:57:59 -- common/autotest_common.sh@1104 -- $ echo 'using asan' 00:01:37.831 00:01:37.831 real 0m0.000s 00:01:37.831 user 0m0.000s 00:01:37.831 sys 0m0.000s 00:01:37.831 ************************************ 00:01:37.831 20:57:59 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:37.831 20:57:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:37.832 END TEST asan 00:01:37.832 ************************************ 00:01:37.832 20:57:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:37.832 20:57:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:37.832 20:57:59 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:37.832 20:57:59 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:37.832 20:57:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:37.832 ************************************ 00:01:37.832 START TEST ubsan 00:01:37.832 ************************************ 00:01:37.832 using ubsan 00:01:37.832 20:57:59 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:37.832 00:01:37.832 real 0m0.000s 00:01:37.832 user 0m0.000s 00:01:37.832 sys 0m0.000s 00:01:37.832 20:57:59 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:37.832 20:57:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:37.832 ************************************ 00:01:37.832 END TEST ubsan 00:01:37.832 ************************************ 00:01:37.832 20:57:59 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:37.832 20:57:59 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:37.832 20:57:59 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:37.832 20:57:59 -- common/autotest_common.sh@1077 -- $ '[' 2 
-le 1 ']' 00:01:37.832 20:57:59 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:37.832 20:57:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:37.832 ************************************ 00:01:37.832 START TEST build_native_dpdk 00:01:37.832 ************************************ 00:01:37.832 20:57:59 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:01:37.832 20:57:59 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:37.832 20:57:59 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:37.832 20:57:59 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:37.832 20:57:59 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:37.832 20:57:59 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:37.832 20:57:59 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:37.832 20:57:59 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:37.832 20:57:59 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:37.832 20:57:59 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:37.832 20:57:59 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:37.832 20:57:59 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:37.832 20:57:59 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:37.832 20:57:59 -- common/autobuild_common.sh@68 -- $ compiler_version=9 00:01:37.832 20:57:59 -- common/autobuild_common.sh@69 -- $ compiler_version=9 00:01:37.832 20:57:59 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:01:37.832 20:57:59 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:37.832 20:57:59 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:01:37.832 20:57:59 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:01:37.832 20:57:59 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:01:37.832 20:57:59 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:01:37.832 eeb0605f11 version: 23.11.0 00:01:37.832 238778122a doc: update release notes for 23.11 00:01:37.832 46aa6b3cfc doc: fix description of RSS features 00:01:37.832 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:37.832 7e421ae345 devtools: support skipping forbid rule check 00:01:37.832 20:57:59 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:37.832 20:57:59 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:37.832 20:57:59 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:37.832 20:57:59 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:37.832 20:57:59 -- common/autobuild_common.sh@89 -- $ [[ 9 -ge 5 ]] 00:01:37.832 20:57:59 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:37.832 20:57:59 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:37.832 20:57:59 -- common/autobuild_common.sh@93 -- $ [[ 9 -ge 10 ]] 00:01:37.832 20:57:59 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:37.832 20:57:59 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:37.832 20:57:59 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:37.832 20:57:59 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:37.832 20:57:59 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:37.832 20:57:59 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:01:37.832 20:57:59 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:37.832 20:57:59 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:37.832 20:57:59 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:37.832 20:57:59 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:37.832 20:57:59 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:37.832 20:57:59 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:37.832 20:57:59 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:37.832 20:57:59 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:37.832 20:57:59 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:37.832 20:57:59 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:37.832 20:57:59 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:37.832 20:57:59 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:37.832 20:57:59 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:37.832 20:57:59 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:37.832 20:57:59 -- scripts/common.sh@343 -- $ case "$op" in 00:01:37.832 20:57:59 -- scripts/common.sh@344 -- $ : 1 00:01:37.832 20:57:59 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:37.832 20:57:59 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:37.832 20:57:59 -- scripts/common.sh@364 -- $ decimal 23 00:01:37.832 20:57:59 -- scripts/common.sh@352 -- $ local d=23 00:01:37.832 20:57:59 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:37.832 20:57:59 -- scripts/common.sh@354 -- $ echo 23 00:01:37.832 20:57:59 -- scripts/common.sh@364 -- $ ver1[v]=23 00:01:37.832 20:57:59 -- scripts/common.sh@365 -- $ decimal 21 00:01:37.832 20:57:59 -- scripts/common.sh@352 -- $ local d=21 00:01:37.832 20:57:59 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:37.832 20:57:59 -- scripts/common.sh@354 -- $ echo 21 00:01:37.832 20:57:59 -- scripts/common.sh@365 -- $ ver2[v]=21 00:01:37.832 20:57:59 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:37.832 20:57:59 -- scripts/common.sh@366 -- $ return 1 00:01:37.832 20:57:59 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:37.832 patching file config/rte_config.h 00:01:37.832 Hunk #1 succeeded at 60 (offset 1 line). 00:01:37.832 20:57:59 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:37.832 20:57:59 -- common/autobuild_common.sh@178 -- $ uname -s 00:01:37.832 20:57:59 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:37.832 20:57:59 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:37.832 20:57:59 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:43.096 The Meson build system 00:01:43.096 Version: 1.4.0 00:01:43.096 Source dir: /home/vagrant/spdk_repo/dpdk 00:01:43.096 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:01:43.096 Build type: native build 00:01:43.096 Program cat found: YES (/usr/bin/cat) 00:01:43.096 Project name: DPDK 00:01:43.096 Project version: 23.11.0 00:01:43.096 C compiler for the host machine: gcc (gcc 9.4.0 "gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0") 00:01:43.096 C linker for the host machine: gcc ld.bfd 2.34 00:01:43.096 Host machine cpu family: x86_64 00:01:43.096 Host machine cpu: x86_64 00:01:43.096 Message: ## Building in Developer Mode ## 00:01:43.096 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:43.096 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:01:43.096 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:01:43.096 Program python3 found: YES (/usr/bin/python3) 00:01:43.096 Program cat found: YES (/usr/bin/cat) 00:01:43.096 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
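Editor's note: the scripts/common.sh xtrace a few entries up ("lt 23.11.0 21.11.0" through "return 1") decides whether the rte_config.h patch for DPDK >= 21.11 applies. A condensed sketch of that comparison logic, assuming purely numeric components (the traced code also regex-checks each field with ^[0-9]+$):

    lt() {            # true when $1 < $2, e.g. lt 21.11.0 23.11.0
      local -a ver1 ver2
      # Split on '.', '-' and ':' exactly as the traced IFS=.-: does.
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v
      for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1        # equal versions are not less-than
    }
    lt 23.11.0 21.11.0 || echo 'not older than 21.11 - apply rte_config.h patch'

Here 23 > 21 at the first component, so lt returns 1, matching the "return 1" in the trace and the subsequent "patching file config/rte_config.h".
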
00:01:43.096 Compiler for C supports arguments -march=native: YES 00:01:43.096 Checking for size of "void *" : 8 00:01:43.096 Checking for size of "void *" : 8 (cached) 00:01:43.096 Library m found: YES 00:01:43.096 Library numa found: YES 00:01:43.096 Has header "numaif.h" : YES 00:01:43.096 Library fdt found: NO 00:01:43.096 Library execinfo found: NO 00:01:43.096 Has header "execinfo.h" : YES 00:01:43.096 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1 00:01:43.096 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:43.096 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:43.096 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:43.096 Run-time dependency openssl found: YES 1.1.1f 00:01:43.097 Run-time dependency libpcap found: NO (tried pkgconfig) 00:01:43.097 Library pcap found: NO 00:01:43.097 Compiler for C supports arguments -Wcast-qual: YES 00:01:43.097 Compiler for C supports arguments -Wdeprecated: YES 00:01:43.097 Compiler for C supports arguments -Wformat: YES 00:01:43.097 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:43.097 Compiler for C supports arguments -Wformat-security: YES 00:01:43.097 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:43.097 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:43.097 Compiler for C supports arguments -Wnested-externs: YES 00:01:43.097 Compiler for C supports arguments -Wold-style-definition: YES 00:01:43.097 Compiler for C supports arguments -Wpointer-arith: YES 00:01:43.097 Compiler for C supports arguments -Wsign-compare: YES 00:01:43.097 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:43.097 Compiler for C supports arguments -Wundef: YES 00:01:43.097 Compiler for C supports arguments -Wwrite-strings: YES 00:01:43.097 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:43.097 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:43.097 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:43.097 Program objdump found: YES (/usr/bin/objdump) 00:01:43.097 Compiler for C supports arguments -mavx512f: YES 00:01:43.097 Checking if "AVX512 checking" compiles: YES 00:01:43.097 Fetching value of define "__SSE4_2__" : 1 00:01:43.097 Fetching value of define "__AES__" : 1 00:01:43.097 Fetching value of define "__AVX__" : 1 00:01:43.097 Fetching value of define "__AVX2__" : 1 00:01:43.097 Fetching value of define "__AVX512BW__" : (undefined) 00:01:43.097 Fetching value of define "__AVX512CD__" : (undefined) 00:01:43.097 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:43.097 Fetching value of define "__AVX512F__" : (undefined) 00:01:43.097 Fetching value of define "__AVX512VL__" : (undefined) 00:01:43.097 Fetching value of define "__PCLMUL__" : 1 00:01:43.097 Fetching value of define "__RDRND__" : 1 00:01:43.097 Fetching value of define "__RDSEED__" : 1 00:01:43.097 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:43.097 Fetching value of define "__znver1__" : (undefined) 00:01:43.097 Fetching value of define "__znver2__" : (undefined) 00:01:43.097 Fetching value of define "__znver3__" : (undefined) 00:01:43.097 Fetching value of define "__znver4__" : (undefined) 00:01:43.097 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:43.097 Message: lib/log: Defining dependency "log" 00:01:43.097 Message: lib/kvargs: Defining dependency "kvargs" 00:01:43.097 Message: lib/telemetry: Defining dependency "telemetry" 00:01:43.097 Checking for function 
"getentropy" : NO 00:01:43.097 Message: lib/eal: Defining dependency "eal" 00:01:43.097 Message: lib/ring: Defining dependency "ring" 00:01:43.097 Message: lib/rcu: Defining dependency "rcu" 00:01:43.097 Message: lib/mempool: Defining dependency "mempool" 00:01:43.097 Message: lib/mbuf: Defining dependency "mbuf" 00:01:43.097 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:43.097 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:43.097 Compiler for C supports arguments -mpclmul: YES 00:01:43.097 Compiler for C supports arguments -maes: YES 00:01:43.097 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:43.097 Compiler for C supports arguments -mavx512bw: YES 00:01:43.097 Compiler for C supports arguments -mavx512dq: YES 00:01:43.097 Compiler for C supports arguments -mavx512vl: YES 00:01:43.097 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:43.097 Compiler for C supports arguments -mavx2: YES 00:01:43.097 Compiler for C supports arguments -mavx: YES 00:01:43.097 Message: lib/net: Defining dependency "net" 00:01:43.097 Message: lib/meter: Defining dependency "meter" 00:01:43.097 Message: lib/ethdev: Defining dependency "ethdev" 00:01:43.097 Message: lib/pci: Defining dependency "pci" 00:01:43.097 Message: lib/cmdline: Defining dependency "cmdline" 00:01:43.097 Message: lib/metrics: Defining dependency "metrics" 00:01:43.097 Message: lib/hash: Defining dependency "hash" 00:01:43.097 Message: lib/timer: Defining dependency "timer" 00:01:43.097 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:43.097 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:43.097 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:43.097 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:43.097 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:43.097 Message: lib/acl: Defining dependency "acl" 00:01:43.097 Message: lib/bbdev: Defining dependency "bbdev" 00:01:43.097 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:43.097 Run-time dependency libelf found: YES 0.176 00:01:43.097 lib/bpf/meson.build:43: WARNING: libpcap is missing, rte_bpf_convert API will be disabled 00:01:43.097 Message: lib/bpf: Defining dependency "bpf" 00:01:43.097 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:43.097 Message: lib/compressdev: Defining dependency "compressdev" 00:01:43.097 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:43.097 Message: lib/distributor: Defining dependency "distributor" 00:01:43.097 Message: lib/dmadev: Defining dependency "dmadev" 00:01:43.097 Message: lib/efd: Defining dependency "efd" 00:01:43.097 Message: lib/eventdev: Defining dependency "eventdev" 00:01:43.097 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:43.097 Message: lib/gpudev: Defining dependency "gpudev" 00:01:43.097 Message: lib/gro: Defining dependency "gro" 00:01:43.097 Message: lib/gso: Defining dependency "gso" 00:01:43.097 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:43.097 Message: lib/jobstats: Defining dependency "jobstats" 00:01:43.097 Message: lib/latencystats: Defining dependency "latencystats" 00:01:43.097 Message: lib/lpm: Defining dependency "lpm" 00:01:43.097 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:43.097 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:43.097 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:43.097 Compiler for C supports 
arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:43.097 Message: lib/member: Defining dependency "member" 00:01:43.097 Message: lib/pcapng: Defining dependency "pcapng" 00:01:43.097 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:43.097 Message: lib/power: Defining dependency "power" 00:01:43.097 Message: lib/rawdev: Defining dependency "rawdev" 00:01:43.097 Message: lib/regexdev: Defining dependency "regexdev" 00:01:43.097 Message: lib/mldev: Defining dependency "mldev" 00:01:43.097 Message: lib/rib: Defining dependency "rib" 00:01:43.097 Message: lib/reorder: Defining dependency "reorder" 00:01:43.097 Message: lib/sched: Defining dependency "sched" 00:01:43.097 Message: lib/security: Defining dependency "security" 00:01:43.097 Message: lib/stack: Defining dependency "stack" 00:01:43.097 Has header "linux/userfaultfd.h" : YES 00:01:43.097 Has header "linux/vduse.h" : NO 00:01:43.097 Message: lib/vhost: Defining dependency "vhost" 00:01:43.097 Message: lib/ipsec: Defining dependency "ipsec" 00:01:43.097 Message: lib/pdcp: Defining dependency "pdcp" 00:01:43.097 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:43.097 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:43.097 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:43.097 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:43.097 Message: lib/fib: Defining dependency "fib" 00:01:43.097 Message: lib/port: Defining dependency "port" 00:01:43.097 Message: lib/pdump: Defining dependency "pdump" 00:01:43.097 Message: lib/table: Defining dependency "table" 00:01:43.097 Message: lib/pipeline: Defining dependency "pipeline" 00:01:43.097 Message: lib/graph: Defining dependency "graph" 00:01:43.097 Message: lib/node: Defining dependency "node" 00:01:43.097 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:44.475 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:44.475 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:44.475 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:44.475 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:44.475 Compiler for C supports arguments -Wno-unused-value: YES 00:01:44.475 Compiler for C supports arguments -Wno-format: YES 00:01:44.475 Compiler for C supports arguments -Wno-format-security: YES 00:01:44.475 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:44.475 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:44.475 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:44.475 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:44.475 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:44.475 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:44.475 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:44.475 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:44.475 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:44.475 Has header "sys/epoll.h" : YES 00:01:44.475 Program doxygen found: YES (/usr/bin/doxygen) 00:01:44.475 Configuring doxy-api-html.conf using configuration 00:01:44.475 Configuring doxy-api-man.conf using configuration 00:01:44.475 Program mandb found: YES (/usr/bin/mandb) 00:01:44.475 Program sphinx-build found: NO 00:01:44.475 Configuring rte_build_config.h using configuration 00:01:44.475 Message: 00:01:44.475 ================= 00:01:44.475 Applications Enabled 00:01:44.475 
================= 00:01:44.475 00:01:44.475 apps: 00:01:44.475 graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:44.475 test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, test-pmd, 00:01:44.475 test-regex, test-sad, test-security-perf, 00:01:44.475 00:01:44.475 Message: 00:01:44.475 ================= 00:01:44.475 Libraries Enabled 00:01:44.475 ================= 00:01:44.475 00:01:44.475 libs: 00:01:44.475 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:44.475 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:44.475 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:44.475 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:44.475 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:44.475 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:44.475 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:44.475 00:01:44.475 00:01:44.475 Message: 00:01:44.475 =============== 00:01:44.475 Drivers Enabled 00:01:44.475 =============== 00:01:44.475 00:01:44.475 common: 00:01:44.475 00:01:44.475 bus: 00:01:44.475 pci, vdev, 00:01:44.475 mempool: 00:01:44.475 ring, 00:01:44.475 dma: 00:01:44.475 00:01:44.475 net: 00:01:44.475 i40e, 00:01:44.475 raw: 00:01:44.475 00:01:44.475 crypto: 00:01:44.475 00:01:44.475 compress: 00:01:44.475 00:01:44.475 regex: 00:01:44.475 00:01:44.475 ml: 00:01:44.475 00:01:44.475 vdpa: 00:01:44.475 00:01:44.475 event: 00:01:44.475 00:01:44.475 baseband: 00:01:44.475 00:01:44.475 gpu: 00:01:44.475 00:01:44.475 00:01:44.475 Message: 00:01:44.475 ================= 00:01:44.475 Content Skipped 00:01:44.475 ================= 00:01:44.475 00:01:44.475 apps: 00:01:44.475 dumpcap: missing dependency, "libpcap" 00:01:44.475 00:01:44.475 libs: 00:01:44.475 00:01:44.475 drivers: 00:01:44.475 common/cpt: not in enabled drivers build config 00:01:44.475 common/dpaax: not in enabled drivers build config 00:01:44.475 common/iavf: not in enabled drivers build config 00:01:44.475 common/idpf: not in enabled drivers build config 00:01:44.475 common/mvep: not in enabled drivers build config 00:01:44.475 common/octeontx: not in enabled drivers build config 00:01:44.475 bus/auxiliary: not in enabled drivers build config 00:01:44.475 bus/cdx: not in enabled drivers build config 00:01:44.475 bus/dpaa: not in enabled drivers build config 00:01:44.475 bus/fslmc: not in enabled drivers build config 00:01:44.475 bus/ifpga: not in enabled drivers build config 00:01:44.475 bus/platform: not in enabled drivers build config 00:01:44.475 bus/vmbus: not in enabled drivers build config 00:01:44.475 common/cnxk: not in enabled drivers build config 00:01:44.475 common/mlx5: not in enabled drivers build config 00:01:44.475 common/nfp: not in enabled drivers build config 00:01:44.475 common/qat: not in enabled drivers build config 00:01:44.475 common/sfc_efx: not in enabled drivers build config 00:01:44.475 mempool/bucket: not in enabled drivers build config 00:01:44.475 mempool/cnxk: not in enabled drivers build config 00:01:44.475 mempool/dpaa: not in enabled drivers build config 00:01:44.475 mempool/dpaa2: not in enabled drivers build config 00:01:44.475 mempool/octeontx: not in enabled drivers build config 00:01:44.475 mempool/stack: not in enabled drivers build config 00:01:44.475 dma/cnxk: not in enabled drivers build config 00:01:44.475 dma/dpaa: not in enabled drivers build config 00:01:44.475 
dma/dpaa2: not in enabled drivers build config 00:01:44.475 dma/hisilicon: not in enabled drivers build config 00:01:44.475 dma/idxd: not in enabled drivers build config 00:01:44.475 dma/ioat: not in enabled drivers build config 00:01:44.475 dma/skeleton: not in enabled drivers build config 00:01:44.475 net/af_packet: not in enabled drivers build config 00:01:44.475 net/af_xdp: not in enabled drivers build config 00:01:44.475 net/ark: not in enabled drivers build config 00:01:44.475 net/atlantic: not in enabled drivers build config 00:01:44.475 net/avp: not in enabled drivers build config 00:01:44.475 net/axgbe: not in enabled drivers build config 00:01:44.475 net/bnx2x: not in enabled drivers build config 00:01:44.475 net/bnxt: not in enabled drivers build config 00:01:44.475 net/bonding: not in enabled drivers build config 00:01:44.475 net/cnxk: not in enabled drivers build config 00:01:44.475 net/cpfl: not in enabled drivers build config 00:01:44.475 net/cxgbe: not in enabled drivers build config 00:01:44.475 net/dpaa: not in enabled drivers build config 00:01:44.475 net/dpaa2: not in enabled drivers build config 00:01:44.475 net/e1000: not in enabled drivers build config 00:01:44.475 net/ena: not in enabled drivers build config 00:01:44.475 net/enetc: not in enabled drivers build config 00:01:44.475 net/enetfec: not in enabled drivers build config 00:01:44.475 net/enic: not in enabled drivers build config 00:01:44.476 net/failsafe: not in enabled drivers build config 00:01:44.476 net/fm10k: not in enabled drivers build config 00:01:44.476 net/gve: not in enabled drivers build config 00:01:44.476 net/hinic: not in enabled drivers build config 00:01:44.476 net/hns3: not in enabled drivers build config 00:01:44.476 net/iavf: not in enabled drivers build config 00:01:44.476 net/ice: not in enabled drivers build config 00:01:44.476 net/idpf: not in enabled drivers build config 00:01:44.476 net/igc: not in enabled drivers build config 00:01:44.476 net/ionic: not in enabled drivers build config 00:01:44.476 net/ipn3ke: not in enabled drivers build config 00:01:44.476 net/ixgbe: not in enabled drivers build config 00:01:44.476 net/mana: not in enabled drivers build config 00:01:44.476 net/memif: not in enabled drivers build config 00:01:44.476 net/mlx4: not in enabled drivers build config 00:01:44.476 net/mlx5: not in enabled drivers build config 00:01:44.476 net/mvneta: not in enabled drivers build config 00:01:44.476 net/mvpp2: not in enabled drivers build config 00:01:44.476 net/netvsc: not in enabled drivers build config 00:01:44.476 net/nfb: not in enabled drivers build config 00:01:44.476 net/nfp: not in enabled drivers build config 00:01:44.476 net/ngbe: not in enabled drivers build config 00:01:44.476 net/null: not in enabled drivers build config 00:01:44.476 net/octeontx: not in enabled drivers build config 00:01:44.476 net/octeon_ep: not in enabled drivers build config 00:01:44.476 net/pcap: not in enabled drivers build config 00:01:44.476 net/pfe: not in enabled drivers build config 00:01:44.476 net/qede: not in enabled drivers build config 00:01:44.476 net/ring: not in enabled drivers build config 00:01:44.476 net/sfc: not in enabled drivers build config 00:01:44.476 net/softnic: not in enabled drivers build config 00:01:44.476 net/tap: not in enabled drivers build config 00:01:44.476 net/thunderx: not in enabled drivers build config 00:01:44.476 net/txgbe: not in enabled drivers build config 00:01:44.476 net/vdev_netvsc: not in enabled drivers build config 00:01:44.476 net/vhost: 
not in enabled drivers build config 00:01:44.476 net/virtio: not in enabled drivers build config 00:01:44.476 net/vmxnet3: not in enabled drivers build config 00:01:44.476 raw/cnxk_bphy: not in enabled drivers build config 00:01:44.476 raw/cnxk_gpio: not in enabled drivers build config 00:01:44.476 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:44.476 raw/ifpga: not in enabled drivers build config 00:01:44.476 raw/ntb: not in enabled drivers build config 00:01:44.476 raw/skeleton: not in enabled drivers build config 00:01:44.476 crypto/armv8: not in enabled drivers build config 00:01:44.476 crypto/bcmfs: not in enabled drivers build config 00:01:44.476 crypto/caam_jr: not in enabled drivers build config 00:01:44.476 crypto/ccp: not in enabled drivers build config 00:01:44.476 crypto/cnxk: not in enabled drivers build config 00:01:44.476 crypto/dpaa_sec: not in enabled drivers build config 00:01:44.476 crypto/dpaa2_sec: not in enabled drivers build config 00:01:44.476 crypto/ipsec_mb: not in enabled drivers build config 00:01:44.476 crypto/mlx5: not in enabled drivers build config 00:01:44.476 crypto/mvsam: not in enabled drivers build config 00:01:44.476 crypto/nitrox: not in enabled drivers build config 00:01:44.476 crypto/null: not in enabled drivers build config 00:01:44.476 crypto/octeontx: not in enabled drivers build config 00:01:44.476 crypto/openssl: not in enabled drivers build config 00:01:44.476 crypto/scheduler: not in enabled drivers build config 00:01:44.476 crypto/uadk: not in enabled drivers build config 00:01:44.476 crypto/virtio: not in enabled drivers build config 00:01:44.476 compress/isal: not in enabled drivers build config 00:01:44.476 compress/mlx5: not in enabled drivers build config 00:01:44.476 compress/octeontx: not in enabled drivers build config 00:01:44.476 compress/zlib: not in enabled drivers build config 00:01:44.476 regex/mlx5: not in enabled drivers build config 00:01:44.476 regex/cn9k: not in enabled drivers build config 00:01:44.476 ml/cnxk: not in enabled drivers build config 00:01:44.476 vdpa/ifc: not in enabled drivers build config 00:01:44.476 vdpa/mlx5: not in enabled drivers build config 00:01:44.476 vdpa/nfp: not in enabled drivers build config 00:01:44.476 vdpa/sfc: not in enabled drivers build config 00:01:44.476 event/cnxk: not in enabled drivers build config 00:01:44.476 event/dlb2: not in enabled drivers build config 00:01:44.476 event/dpaa: not in enabled drivers build config 00:01:44.476 event/dpaa2: not in enabled drivers build config 00:01:44.476 event/dsw: not in enabled drivers build config 00:01:44.476 event/opdl: not in enabled drivers build config 00:01:44.476 event/skeleton: not in enabled drivers build config 00:01:44.476 event/sw: not in enabled drivers build config 00:01:44.476 event/octeontx: not in enabled drivers build config 00:01:44.476 baseband/acc: not in enabled drivers build config 00:01:44.476 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:44.476 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:44.476 baseband/la12xx: not in enabled drivers build config 00:01:44.476 baseband/null: not in enabled drivers build config 00:01:44.476 baseband/turbo_sw: not in enabled drivers build config 00:01:44.476 gpu/cuda: not in enabled drivers build config 00:01:44.476 00:01:44.476 00:01:44.476 Build targets in project: 219 00:01:44.476 00:01:44.476 DPDK 23.11.0 00:01:44.476 00:01:44.476 User defined options 00:01:44.476 libdir : lib 00:01:44.476 prefix : 
/home/vagrant/spdk_repo/dpdk/build 00:01:44.476 c_args : -fPIC -g -fcommon -Werror 00:01:44.476 c_link_args : 00:01:44.476 enable_docs : false 00:01:44.476 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:44.476 enable_kmods : false 00:01:44.476 machine : native 00:01:44.476 tests : false 00:01:44.476 00:01:44.476 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:44.476 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:44.735 20:58:07 -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:01:44.735 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:01:44.735 [1/706] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:44.735 [2/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:44.735 [3/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:44.735 [4/706] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:45.040 [5/706] Linking static target lib/librte_kvargs.a 00:01:45.040 [6/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:45.040 [7/706] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:45.040 [8/706] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:45.040 [9/706] Linking static target lib/librte_log.a 00:01:45.040 [10/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:45.040 [11/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:45.040 [12/706] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.322 [13/706] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:45.322 [14/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:45.322 [15/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:45.322 [16/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:45.322 [17/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:45.580 [18/706] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.580 [19/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:45.580 [20/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:45.580 [21/706] Linking target lib/librte_log.so.24.0 00:01:45.580 [22/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:45.580 [23/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:45.580 [24/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:45.580 [25/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:45.581 [26/706] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:45.839 [27/706] Linking target lib/librte_kvargs.so.24.0 00:01:45.839 [28/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:45.839 [29/706] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:45.839 [30/706] Linking static target lib/librte_telemetry.a 00:01:45.839 [31/706] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:45.839 [32/706] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:45.839 [33/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:46.098 [34/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:46.098 [35/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:46.098 [36/706] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:46.098 [37/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:46.098 [38/706] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:46.098 [39/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:46.098 [40/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:46.098 [41/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:46.356 [42/706] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:46.356 [43/706] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:46.356 [44/706] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.356 [45/706] Linking target lib/librte_telemetry.so.24.0 00:01:46.356 [46/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:46.615 [47/706] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:46.615 [48/706] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:46.615 [49/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:46.615 [50/706] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:46.615 [51/706] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:46.615 [52/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:46.615 [53/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:46.615 [54/706] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:46.873 [55/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:46.873 [56/706] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:46.873 [57/706] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:46.873 [58/706] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:46.873 [59/706] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:46.873 [60/706] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:46.873 [61/706] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:46.873 [62/706] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:46.873 [63/706] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:46.873 [64/706] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:47.132 [65/706] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:47.132 [66/706] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:47.132 [67/706] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:47.132 [68/706] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:47.391 [69/706] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:47.391 [70/706] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:47.391 [71/706] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:47.391 [72/706] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:47.391 [73/706] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:47.391 [74/706] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:47.391 [75/706] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:47.391 [76/706] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:47.391 [77/706] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:47.391 [78/706] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:47.649 [79/706] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:47.649 [80/706] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:47.649 [81/706] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:47.649 [82/706] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:47.907 [83/706] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:47.907 [84/706] Linking static target lib/librte_ring.a 00:01:47.907 [85/706] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:47.907 [86/706] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:48.171 [87/706] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:48.171 [88/706] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.171 [89/706] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:48.171 [90/706] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:48.171 [91/706] Linking static target lib/librte_eal.a 00:01:48.171 [92/706] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:48.171 [93/706] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:48.171 [94/706] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:48.171 [95/706] Linking static target lib/librte_mempool.a 00:01:48.431 [96/706] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:48.431 [97/706] Linking static target lib/librte_rcu.a 00:01:48.431 [98/706] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:48.431 [99/706] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:48.431 [100/706] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:48.690 [101/706] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:48.690 [102/706] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:48.690 [103/706] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:48.690 [104/706] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.690 [105/706] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:48.949 [106/706] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:48.949 [107/706] Linking static target lib/librte_net.a 00:01:48.949 [108/706] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:48.949 [109/706] Linking static target lib/librte_meter.a 00:01:49.207 [110/706] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:49.207 [111/706] Linking static target lib/librte_mbuf.a 00:01:49.207 [112/706] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:49.207 [113/706] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 
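[Editor's note] For reference, the configuration summary recorded above (prefix, libdir, c_args, enable_drivers, and so on) corresponds to an explicit meson setup invocation along the following lines. This is a hedged sketch reassembled from the logged "User defined options", not the exact command the autobuild script ran — the WARNING above shows the script used the deprecated "meson [options]" form rather than "meson setup [options]":

    # Reconstructed from the options logged above; build dir name matches this run.
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    # Then the compile step exactly as logged:
    ninja -C build-tmp -j10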
00:01:49.207 [114/706] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.207 [115/706] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:49.207 [116/706] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:49.207 [117/706] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.207 [118/706] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:49.792 [119/706] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:49.792 [120/706] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.792 [121/706] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:50.056 [122/706] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:50.056 [123/706] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:50.314 [124/706] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:50.314 [125/706] Linking static target lib/librte_pci.a 00:01:50.314 [126/706] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:50.314 [127/706] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:50.314 [128/706] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:50.314 [129/706] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:50.572 [130/706] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:50.572 [131/706] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:50.572 [132/706] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.572 [133/706] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:50.572 [134/706] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:50.572 [135/706] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:50.831 [136/706] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:50.831 [137/706] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:50.831 [138/706] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:50.831 [139/706] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:50.831 [140/706] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:50.831 [141/706] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:50.831 [142/706] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:50.831 [143/706] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:51.089 [144/706] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:51.089 [145/706] Linking static target lib/librte_cmdline.a 00:01:51.089 [146/706] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:51.089 [147/706] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:51.348 [148/706] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:51.348 [149/706] Linking static target lib/librte_metrics.a 00:01:51.348 [150/706] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:51.607 [151/706] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.607 [152/706] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:51.607 [153/706] 
Linking static target lib/librte_timer.a 00:01:51.865 [154/706] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:52.124 [155/706] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.124 [156/706] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.124 [157/706] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:52.382 [158/706] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:52.382 [159/706] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:52.382 [160/706] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:52.641 [161/706] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:52.899 [162/706] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:52.899 [163/706] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:52.899 [164/706] Linking static target lib/librte_bitratestats.a 00:01:52.899 [165/706] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:53.157 [166/706] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.157 [167/706] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:53.157 [168/706] Linking static target lib/librte_bbdev.a 00:01:53.157 [169/706] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:53.157 [170/706] Linking static target lib/librte_hash.a 00:01:53.415 [171/706] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:53.415 [172/706] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:53.415 [173/706] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:53.415 [174/706] Linking static target lib/acl/libavx2_tmp.a 00:01:53.674 [175/706] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:53.932 [176/706] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:53.932 [177/706] Linking static target lib/librte_ethdev.a 00:01:53.932 [178/706] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.932 [179/706] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:53.932 [180/706] Linking static target lib/acl/libavx512_tmp.a 00:01:53.932 [181/706] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.932 [182/706] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:53.932 [183/706] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:53.932 [184/706] Linking static target lib/librte_acl.a 00:01:54.190 [185/706] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:54.190 [186/706] Linking static target lib/librte_cfgfile.a 00:01:54.190 [187/706] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:54.448 [188/706] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.448 [189/706] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:54.448 [190/706] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:54.448 [191/706] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:54.448 [192/706] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:54.448 [193/706] Linking static target lib/librte_compressdev.a 00:01:54.707 [194/706] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.707 [195/706] Compiling C 
object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:54.965 [196/706] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:54.965 [197/706] Linking static target lib/librte_bpf.a 00:01:54.965 [198/706] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:55.224 [199/706] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:55.224 [200/706] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:55.224 [201/706] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:55.224 [202/706] Linking static target lib/librte_distributor.a 00:01:55.224 [203/706] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.224 [204/706] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.482 [205/706] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:55.482 [206/706] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.482 [207/706] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:55.482 [208/706] Linking static target lib/librte_dmadev.a 00:01:55.740 [209/706] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:55.998 [210/706] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.293 [211/706] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:56.293 [212/706] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:56.552 [213/706] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:56.552 [214/706] Linking static target lib/librte_efd.a 00:01:56.552 [215/706] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.552 [216/706] Linking target lib/librte_eal.so.24.0 00:01:56.810 [217/706] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:56.810 [218/706] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:56.810 [219/706] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.810 [220/706] Linking target lib/librte_ring.so.24.0 00:01:56.810 [221/706] Linking target lib/librte_meter.so.24.0 00:01:56.810 [222/706] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:56.810 [223/706] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:56.810 [224/706] Linking target lib/librte_rcu.so.24.0 00:01:56.810 [225/706] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:56.810 [226/706] Linking target lib/librte_mempool.so.24.0 00:01:57.068 [227/706] Linking target lib/librte_pci.so.24.0 00:01:57.068 [228/706] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:57.068 [229/706] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:57.068 [230/706] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:57.068 [231/706] Linking target lib/librte_timer.so.24.0 00:01:57.068 [232/706] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:57.068 [233/706] Linking target lib/librte_mbuf.so.24.0 00:01:57.068 [234/706] Linking target lib/librte_acl.so.24.0 00:01:57.068 [235/706] Linking target 
lib/librte_cfgfile.so.24.0 00:01:57.068 [236/706] Linking static target lib/librte_cryptodev.a 00:01:57.068 [237/706] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:57.068 [238/706] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:57.326 [239/706] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:57.326 [240/706] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:57.326 [241/706] Linking target lib/librte_dmadev.so.24.0 00:01:57.326 [242/706] Linking target lib/librte_net.so.24.0 00:01:57.326 [243/706] Linking target lib/librte_compressdev.so.24.0 00:01:57.326 [244/706] Linking target lib/librte_bbdev.so.24.0 00:01:57.326 [245/706] Linking target lib/librte_distributor.so.24.0 00:01:57.326 [246/706] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:57.326 [247/706] Linking static target lib/librte_dispatcher.a 00:01:57.327 [248/706] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:57.327 [249/706] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:57.327 [250/706] Linking target lib/librte_cmdline.so.24.0 00:01:57.327 [251/706] Linking target lib/librte_hash.so.24.0 00:01:57.327 [252/706] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:57.585 [253/706] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:57.585 [254/706] Linking target lib/librte_efd.so.24.0 00:01:57.843 [255/706] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.843 [256/706] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:57.843 [257/706] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:57.843 [258/706] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:58.101 [259/706] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:58.101 [260/706] Linking static target lib/librte_gpudev.a 00:01:58.101 [261/706] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:58.101 [262/706] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:58.359 [263/706] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:58.359 [264/706] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:58.359 [265/706] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:58.359 [266/706] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:58.359 [267/706] Linking static target lib/librte_eventdev.a 00:01:58.359 [268/706] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:58.359 [269/706] Linking static target lib/librte_gro.a 00:01:58.618 [270/706] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:58.618 [271/706] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.618 [272/706] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:58.618 [273/706] Linking target lib/librte_cryptodev.so.24.0 00:01:58.618 [274/706] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.618 [275/706] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:58.876 [276/706] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:58.876 [277/706] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:58.876 
[278/706] Linking static target lib/librte_gso.a 00:01:58.876 [279/706] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.135 [280/706] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.135 [281/706] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:59.135 [282/706] Linking target lib/librte_gpudev.so.24.0 00:01:59.135 [283/706] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:59.135 [284/706] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:59.135 [285/706] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:59.393 [286/706] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:59.394 [287/706] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:59.394 [288/706] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:59.394 [289/706] Linking static target lib/librte_jobstats.a 00:01:59.394 [290/706] Linking static target lib/librte_ip_frag.a 00:01:59.652 [291/706] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:59.652 [292/706] Linking static target lib/librte_latencystats.a 00:01:59.652 [293/706] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:59.652 [294/706] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:59.652 [295/706] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:59.652 [296/706] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:59.652 [297/706] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.652 [298/706] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.911 [299/706] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.911 [300/706] Linking target lib/librte_jobstats.so.24.0 00:01:59.911 [301/706] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:59.911 [302/706] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:00.171 [303/706] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:00.171 [304/706] Linking static target lib/librte_lpm.a 00:02:00.171 [305/706] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.171 [306/706] Linking target lib/librte_ethdev.so.24.0 00:02:00.171 [307/706] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:00.429 [308/706] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:00.429 [309/706] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:00.429 [310/706] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:00.429 [311/706] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.429 [312/706] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:00.429 [313/706] Linking target lib/librte_metrics.so.24.0 00:02:00.429 [314/706] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:00.429 [315/706] Linking target lib/librte_gro.so.24.0 00:02:00.429 [316/706] Linking target lib/librte_bpf.so.24.0 00:02:00.429 [317/706] Linking target lib/librte_gso.so.24.0 00:02:00.429 [318/706] Compiling C object 
lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:00.429 [319/706] Linking static target lib/librte_pcapng.a 00:02:00.429 [320/706] Linking target lib/librte_ip_frag.so.24.0 00:02:00.429 [321/706] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:00.429 [322/706] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:00.429 [323/706] Linking target lib/librte_lpm.so.24.0 00:02:00.688 [324/706] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:00.688 [325/706] Linking target lib/librte_bitratestats.so.24.0 00:02:00.688 [326/706] Linking target lib/librte_latencystats.so.24.0 00:02:00.688 [327/706] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:00.688 [328/706] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:00.688 [329/706] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.946 [330/706] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:00.946 [331/706] Linking target lib/librte_pcapng.so.24.0 00:02:00.946 [332/706] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:00.946 [333/706] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:00.946 [334/706] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:00.946 [335/706] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:00.946 [336/706] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.205 [337/706] Linking target lib/librte_eventdev.so.24.0 00:02:01.205 [338/706] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:01.205 [339/706] Linking static target lib/librte_regexdev.a 00:02:01.205 [340/706] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:01.205 [341/706] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:01.205 [342/706] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:01.205 [343/706] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:01.205 [344/706] Linking static target lib/librte_rawdev.a 00:02:01.205 [345/706] Linking static target lib/librte_power.a 00:02:01.205 [346/706] Linking target lib/librte_dispatcher.so.24.0 00:02:01.464 [347/706] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:01.464 [348/706] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:01.464 [349/706] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:01.464 [350/706] Linking static target lib/librte_member.a 00:02:01.464 [351/706] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:01.464 [352/706] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:01.464 [353/706] Linking static target lib/librte_mldev.a 00:02:01.722 [354/706] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.722 [355/706] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.722 [356/706] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:01.722 [357/706] Linking target lib/librte_rawdev.so.24.0 00:02:01.722 [358/706] Linking target lib/librte_member.so.24.0 00:02:01.988 [359/706] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:01.988 [360/706] Compiling C 
object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:01.988 [361/706] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.988 [362/706] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:01.988 [363/706] Linking target lib/librte_power.so.24.0 00:02:01.988 [364/706] Linking static target lib/librte_reorder.a 00:02:01.988 [365/706] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.988 [366/706] Linking target lib/librte_regexdev.so.24.0 00:02:02.261 [367/706] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:02.261 [368/706] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:02.261 [369/706] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:02.261 [370/706] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:02.261 [371/706] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:02.261 [372/706] Linking static target lib/librte_rib.a 00:02:02.261 [373/706] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.261 [374/706] Linking target lib/librte_reorder.so.24.0 00:02:02.261 [375/706] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:02.261 [376/706] Linking static target lib/librte_stack.a 00:02:02.556 [377/706] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:02.556 [378/706] Linking static target lib/librte_security.a 00:02:02.556 [379/706] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:02.556 [380/706] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.556 [381/706] Linking target lib/librte_stack.so.24.0 00:02:02.814 [382/706] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:02.814 [383/706] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:02.814 [384/706] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.814 [385/706] Linking target lib/librte_rib.so.24.0 00:02:02.814 [386/706] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:02.814 [387/706] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.814 [388/706] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:02.814 [389/706] Linking target lib/librte_security.so.24.0 00:02:03.072 [390/706] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.072 [391/706] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:03.072 [392/706] Linking target lib/librte_mldev.so.24.0 00:02:03.072 [393/706] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:03.072 [394/706] Linking static target lib/librte_sched.a 00:02:03.331 [395/706] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:03.331 [396/706] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:03.589 [397/706] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:03.589 [398/706] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.589 [399/706] Linking target lib/librte_sched.so.24.0 00:02:03.589 [400/706] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:03.589 [401/706] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:03.848 [402/706] Compiling C object 
lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:03.848 [403/706] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:04.106 [404/706] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:04.106 [405/706] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:04.106 [406/706] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:04.106 [407/706] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:04.365 [408/706] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:04.365 [409/706] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:04.365 [410/706] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:04.365 [411/706] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:04.365 [412/706] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:04.365 [413/706] Linking static target lib/librte_ipsec.a 00:02:04.624 [414/706] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:04.624 [415/706] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:04.624 [416/706] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:04.624 [417/706] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:04.624 [418/706] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:04.881 [419/706] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.139 [420/706] Linking target lib/librte_ipsec.so.24.0 00:02:05.139 [421/706] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:05.139 [422/706] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:05.397 [423/706] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:05.397 [424/706] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:05.397 [425/706] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:05.397 [426/706] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:05.397 [427/706] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:05.397 [428/706] Linking static target lib/librte_fib.a 00:02:05.654 [429/706] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:05.654 [430/706] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:05.655 [431/706] Linking static target lib/librte_pdcp.a 00:02:05.912 [432/706] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.912 [433/706] Linking target lib/librte_fib.so.24.0 00:02:05.912 [434/706] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:06.170 [435/706] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.170 [436/706] Linking target lib/librte_pdcp.so.24.0 00:02:06.170 [437/706] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:06.170 [438/706] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:06.170 [439/706] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:06.170 [440/706] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:06.427 [441/706] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:06.427 [442/706] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:06.686 [443/706] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:06.686 [444/706] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 
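[Editor's note] The "Generating symbol file ... .symbols" and "sym_chk" steps interleaved with the compile lines are meson bookkeeping around each shared library's exported-symbol surface, captured before dependent targets link against it. As a rough illustration only (not the build's actual check script), the exports of one of the libraries linked above can be listed by hand:

    # List dynamic, defined symbols of the freshly built EAL library;
    # path under build-tmp and the 24.0 version suffix are as logged in this run.
    nm -D --defined-only build-tmp/lib/librte_eal.so.24.0 | grep rte_eal_init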
00:02:06.686 [445/706] Linking static target lib/librte_port.a 00:02:06.686 [446/706] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:06.686 [447/706] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:06.686 [448/706] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:06.686 [449/706] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:06.943 [450/706] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:06.943 [451/706] Linking static target lib/librte_pdump.a 00:02:06.943 [452/706] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:06.943 [453/706] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:07.201 [454/706] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:07.201 [455/706] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.201 [456/706] Linking target lib/librte_port.so.24.0 00:02:07.201 [457/706] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.458 [458/706] Linking target lib/librte_pdump.so.24.0 00:02:07.458 [459/706] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:07.458 [460/706] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:07.716 [461/706] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:07.716 [462/706] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:07.716 [463/706] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:07.716 [464/706] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:07.716 [465/706] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:07.716 [466/706] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:07.974 [467/706] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:07.974 [468/706] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:07.974 [469/706] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:07.974 [470/706] Linking static target lib/librte_table.a 00:02:08.231 [471/706] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:08.490 [472/706] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:08.893 [473/706] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:08.893 [474/706] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.893 [475/706] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:08.893 [476/706] Linking target lib/librte_table.so.24.0 00:02:08.893 [477/706] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:08.893 [478/706] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:08.893 [479/706] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:09.151 [480/706] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:09.151 [481/706] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:09.151 [482/706] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:09.151 [483/706] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:09.410 [484/706] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:09.669 [485/706] Compiling C object 
lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:09.669 [486/706] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:09.669 [487/706] Linking static target lib/librte_graph.a 00:02:09.669 [488/706] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:09.669 [489/706] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:09.927 [490/706] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:09.927 [491/706] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:10.186 [492/706] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:10.444 [493/706] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:10.444 [494/706] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.444 [495/706] Linking target lib/librte_graph.so.24.0 00:02:10.444 [496/706] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:10.444 [497/706] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:10.702 [498/706] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:10.702 [499/706] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:10.702 [500/706] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:10.702 [501/706] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:10.702 [502/706] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:10.702 [503/706] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:10.960 [504/706] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:10.960 [505/706] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:11.219 [506/706] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:11.219 [507/706] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:11.219 [508/706] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:11.219 [509/706] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:11.219 [510/706] Linking static target lib/librte_node.a 00:02:11.219 [511/706] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:11.219 [512/706] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:11.478 [513/706] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:11.478 [514/706] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.736 [515/706] Linking target lib/librte_node.so.24.0 00:02:11.736 [516/706] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:11.736 [517/706] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:11.736 [518/706] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:11.736 [519/706] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:11.736 [520/706] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:11.736 [521/706] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:11.736 [522/706] Linking static target drivers/librte_bus_vdev.a 00:02:11.995 [523/706] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:11.995 [524/706] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:11.995 [525/706] Linking static target drivers/librte_bus_pci.a 00:02:11.995 [526/706] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:11.995 [527/706] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:11.995 [528/706] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:11.995 [529/706] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:11.995 [530/706] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:12.253 [531/706] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.253 [532/706] Linking target drivers/librte_bus_vdev.so.24.0 00:02:12.253 [533/706] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:12.253 [534/706] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:12.253 [535/706] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:12.253 [536/706] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:12.512 [537/706] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.512 [538/706] Linking static target drivers/librte_mempool_ring.a 00:02:12.512 [539/706] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.512 [540/706] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:12.512 [541/706] Linking target drivers/librte_mempool_ring.so.24.0 00:02:12.512 [542/706] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.512 [543/706] Linking target drivers/librte_bus_pci.so.24.0 00:02:12.512 [544/706] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:12.770 [545/706] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:13.028 [546/706] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:13.286 [547/706] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:13.286 [548/706] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:13.552 [549/706] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:14.136 [550/706] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:14.136 [551/706] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:14.136 [552/706] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:14.136 [553/706] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:14.136 [554/706] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:14.136 [555/706] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:14.394 [556/706] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:14.652 [557/706] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:14.910 [558/706] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:14.910 [559/706] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:14.910 [560/706] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:15.168 [561/706] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:15.168 [562/706] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:15.168 
[563/706] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:15.426 [564/706] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:15.685 [565/706] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:15.685 [566/706] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:15.685 [567/706] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:15.685 [568/706] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:15.943 [569/706] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:15.943 [570/706] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:15.943 [571/706] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:15.943 [572/706] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:16.201 [573/706] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:16.201 [574/706] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:16.201 [575/706] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:16.460 [576/706] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:16.460 [577/706] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:16.460 [578/706] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:16.718 [579/706] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:16.718 [580/706] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:16.718 [581/706] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:16.718 [582/706] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:16.976 [583/706] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:17.236 [584/706] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:17.236 [585/706] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:17.236 [586/706] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:17.236 [587/706] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:17.236 [588/706] Linking static target drivers/librte_net_i40e.a 00:02:17.236 [589/706] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:17.236 [590/706] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:17.802 [591/706] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:17.802 [592/706] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:17.802 [593/706] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:17.802 [594/706] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:18.060 [595/706] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:18.060 [596/706] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.060 [597/706] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:18.060 [598/706] Linking target drivers/librte_net_i40e.so.24.0 00:02:18.060 [599/706] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:18.358 [600/706] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:18.616 [601/706] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:18.616 [602/706] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:18.616 [603/706] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:18.616 [604/706] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:18.616 [605/706] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:18.616 [606/706] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:18.616 [607/706] Linking static target lib/librte_vhost.a 00:02:18.875 [608/706] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:18.875 [609/706] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:18.875 [610/706] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:19.133 [611/706] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:19.133 [612/706] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:19.133 [613/706] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:19.133 [614/706] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:19.391 [615/706] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:19.648 [616/706] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:19.648 [617/706] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:19.905 [618/706] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:20.470 [619/706] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:20.470 [620/706] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:20.470 [621/706] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:20.729 [622/706] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:20.729 [623/706] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.729 [624/706] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:20.729 [625/706] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:20.729 [626/706] Linking target lib/librte_vhost.so.24.0 00:02:20.987 [627/706] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:20.987 [628/706] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:20.987 [629/706] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:21.245 [630/706] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:21.245 [631/706] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:21.245 [632/706] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:21.245 [633/706] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:21.245 [634/706] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:21.504 [635/706] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:21.504 [636/706] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:21.504 [637/706] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:21.504 [638/706] 
Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:21.762 [639/706] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:21.762 [640/706] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:21.762 [641/706] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:22.021 [642/706] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:22.021 [643/706] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:22.021 [644/706] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:22.021 [645/706] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:22.280 [646/706] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:22.280 [647/706] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:22.280 [648/706] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:22.539 [649/706] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:22.539 [650/706] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:22.539 [651/706] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:22.798 [652/706] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:22.798 [653/706] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:22.798 [654/706] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:23.056 [655/706] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:23.056 [656/706] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:23.315 [657/706] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:23.315 [658/706] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:23.315 [659/706] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:23.315 [660/706] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:23.315 [661/706] Linking static target lib/librte_pipeline.a 00:02:23.592 [662/706] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:23.851 [663/706] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:23.851 [664/706] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:23.851 [665/706] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:23.851 [666/706] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:24.107 [667/706] Linking target app/dpdk-graph 00:02:24.365 [668/706] Linking target app/dpdk-pdump 00:02:24.365 [669/706] Linking target app/dpdk-proc-info 00:02:24.365 [670/706] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:24.624 [671/706] Linking target app/dpdk-test-acl 00:02:24.624 [672/706] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:24.624 [673/706] Linking target app/dpdk-test-bbdev 00:02:24.882 [674/706] Linking target app/dpdk-test-cmdline 00:02:25.140 [675/706] Linking target app/dpdk-test-compress-perf 00:02:25.140 [676/706] Linking target app/dpdk-test-crypto-perf 00:02:25.140 [677/706] Linking target app/dpdk-test-dma-perf 00:02:25.140 [678/706] Linking target app/dpdk-test-fib 00:02:25.140 [679/706] Linking target app/dpdk-test-flow-perf 00:02:25.140 [680/706] Linking target app/dpdk-test-eventdev 00:02:25.399 [681/706] Linking target app/dpdk-test-gpudev 
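[Editor's note] With the application link steps above, the binaries land under build-tmp/app. A minimal manual smoke test of dpdk-testpmd could look like the sketch below; the core list, memory-channel count, and PCI address are host-dependent placeholders, and this CI run does not invoke the binary this way:

    # Interactive testpmd on cores 0-1; 0000:00:04.0 is a placeholder device.
    sudo ./build-tmp/app/dpdk-testpmd -l 0-1 -n 4 -a 0000:00:04.0 -- -i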
00:02:25.657 [682/706] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:25.657 [683/706] Linking target app/dpdk-test-pipeline 00:02:25.657 [684/706] Linking target app/dpdk-test-mldev 00:02:25.657 [685/706] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:25.914 [686/706] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:25.914 [687/706] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:25.914 [688/706] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:26.172 [689/706] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:26.172 [690/706] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:26.429 [691/706] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:26.430 [692/706] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:26.688 [693/706] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.688 [694/706] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:26.688 [695/706] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:26.688 [696/706] Linking target lib/librte_pipeline.so.24.0 00:02:26.688 [697/706] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:26.946 [698/706] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:26.946 [699/706] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:27.203 [700/706] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:27.462 [701/706] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:27.462 [702/706] Linking target app/dpdk-test-regex 00:02:27.462 [703/706] Linking target app/dpdk-test-sad 00:02:27.720 [704/706] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:27.991 [705/706] Linking target app/dpdk-test-security-perf 00:02:28.585 [706/706] Linking target app/dpdk-testpmd 00:02:28.585 20:58:51 -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:28.585 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:28.585 [0/1] Installing files. 
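(For context: the stream above is ordinary meson/ninja output, ending with the final link of dpdk-testpmd at [706/706], after which the autobuild script re-invokes ninja with the install target. A minimal sketch of the sequence this job appears to run; the two ninja invocations are taken directly from the log, while the meson setup line and its --prefix are assumptions inferred from the install destinations that follow, since the configure step is not shown in this excerpt:

$ cd /home/vagrant/spdk_repo/dpdk
$ meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build   # assumed configure step; prefix inferred from the build/share and build/lib destinations below
$ ninja -C build-tmp -j10           # produces the Compiling/Linking lines above
$ ninja -C build-tmp -j10 install   # the command logged here; meson then copies examples, libraries and apps)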
00:02:29.155 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:29.155 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:29.155 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:29.156 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:29.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:29.157 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 
00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.158 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:29.159 Installing 
/home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:29.159 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:29.159 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:29.160 
Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:29.160 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:29.160 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 
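Each library in this stretch is installed twice, as a static archive (librte_*.a) and as a versioned shared object (librte_*.so.24.0), so a consumer can choose its link mode. A hedged link-line sketch (main.c is a hypothetical application source, not part of this log):

  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  # Dynamic link against the .so.24.0 objects:
  cc -O2 main.c -o app $(pkg-config --cflags --libs libdpdk)
  # Static link against the .a archives; --static pulls in the extra
  # whole-archive and private dependency flags recorded in libdpdk.pc:
  cc -O2 main.c -o app_static $(pkg-config --cflags --static --libs libdpdk)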
Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.160 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_gso.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_port.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.161 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.099 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.099 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:30.099 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.099 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:30.099 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.099 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:30.099 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.099 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:30.099 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.099 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.099 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.099 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.099 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.099 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.099 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.099 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.099 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.099 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.099 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.099 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.099 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.099 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.099 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.099 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.099 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.099 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.099 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.099 Installing 
/home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 
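With the dpdk-* apps now staged into build/bin just above, a quick smoke test becomes possible on a suitably configured host. An illustrative invocation only; the core list, channel count, and hugepage setup are assumptions, not taken from this log:

  # -l 0-1 pins EAL to cores 0-1 and -n 4 sets the memory channel count;
  # everything after `--` is testpmd's own argument list, and -i starts
  # the interactive CLI.
  sudo /home/vagrant/spdk_repo/dpdk/build/bin/dpdk-testpmd -l 0-1 -n 4 -- -i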
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.099 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 
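The run of protocol and device headers just installed (rte_ip.h, rte_tcp.h, rte_ether.h, and the rte_ethdev.h family) can be sanity-checked with a compile-only probe. A sketch, assuming the include root from this log; some DPDK headers may additionally want the arch flags carried in libdpdk.pc, so treat a failure here as a hint rather than a verdict:

  printf '#include <rte_ethdev.h>\nint main(void){return 0;}\n' > /tmp/hdr_probe.c
  # Compile only; no libraries are linked, so this exercises the header tree.
  cc -I/home/vagrant/spdk_repo/dpdk/build/include -c /tmp/hdr_probe.c -o /tmp/hdr_probe.o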
Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing 
/home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing 
/home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 
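The public headers are now in place; the entries that follow stage the remaining driver headers, the usertools helper scripts into build/bin, the pkg-config metadata, and then the SONAME symlink chains. For each library the chain reads librte_X.so -> librte_X.so.24 -> librte_X.so.24.0, which lets applications link against the unversioned name while the runtime loader resolves the ABI-versioned object. One way to inspect the result afterwards (sketch, paths taken from this log):

  ls -l /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so*
  # Expected shape:
  #   librte_eal.so -> librte_eal.so.24
  #   librte_eal.so.24 -> librte_eal.so.24.0
  #   librte_eal.so.24.0   (the real ELF object)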
Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.100 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.101 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.101 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.101 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.101 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.101 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:30.101 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:30.101 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:02:30.101 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:02:30.101 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:02:30.101 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:30.101 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:02:30.101 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:30.101 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:02:30.101 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:30.101 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:02:30.101 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:30.101 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:02:30.101 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:30.101 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:02:30.101 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:30.101 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:02:30.101 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:30.101 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:02:30.101 Installing symlink pointing to librte_net.so.24 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:30.101 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:02:30.101 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:30.101 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:02:30.101 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:30.101 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:02:30.101 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:30.101 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:02:30.101 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:30.101 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:02:30.101 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:30.101 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:02:30.101 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:30.101 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:02:30.101 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:30.101 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:02:30.101 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:30.101 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:02:30.101 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:30.101 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:02:30.101 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:30.101 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:02:30.101 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:30.101 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:02:30.101 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:30.101 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:02:30.101 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:30.101 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:02:30.101 Installing symlink pointing to librte_cryptodev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:30.101 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:02:30.101 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:30.101 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:02:30.101 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:30.101 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:02:30.101 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:30.101 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:02:30.101 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:30.101 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:02:30.101 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:02:30.101 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:02:30.101 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:30.101 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:02:30.101 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:30.101 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:02:30.101 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:30.101 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:02:30.101 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:30.101 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:02:30.101 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:30.101 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:02:30.101 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:30.101 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:02:30.101 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:30.101 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:02:30.101 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:30.101 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:02:30.101 Installing symlink pointing to librte_pcapng.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:30.101 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:02:30.101 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:30.101 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:02:30.101 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:30.101 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:02:30.101 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:30.101 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:02:30.101 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:02:30.101 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:02:30.101 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:30.101 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:02:30.101 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:30.101 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:02:30.101 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:30.101 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:02:30.101 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:30.101 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:02:30.101 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:30.101 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:02:30.101 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:30.101 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:30.101 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:30.101 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:30.101 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:30.101 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:30.101 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:30.101 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:30.101 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:30.101 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:30.101 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:30.101 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:30.101 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:30.101 
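[Editor's note] The install log above shows the usual three-name scheme for each shared object (real file librte_X.so.24.0, SONAME link librte_X.so.24, linker-name link librte_X.so), plus per-driver copies under dpdk/pmds-24.0 (the './librte_bus_pci.so' -> ... lines) so that EAL can locate PMDs as loadable plugins. A minimal sketch of the equivalent symlink chain, using librte_eal as the example; in the real run Meson and symlink-drivers-solibs.sh create these, nothing is typed by hand:

# Illustrative only -- recreates the three-name chain installed above.
cd /home/vagrant/spdk_repo/dpdk/build/lib
ln -sf librte_eal.so.24.0 librte_eal.so.24   # SONAME link, resolved by the dynamic loader at run time
ln -sf librte_eal.so.24   librte_eal.so      # linker name, resolved by -lrte_eal at build time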
Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:02:30.101 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:30.101 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:02:30.101 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:02:30.101 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:02:30.101 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:30.101 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:02:30.101 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:30.101 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:02:30.101 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:30.101 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:02:30.101 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:30.101 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:02:30.101 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:30.101 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:02:30.101 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:30.101 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:02:30.101 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:30.101 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:30.101 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:30.101 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:30.101 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:30.101 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:30.101 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:30.101 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:30.101 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:30.101 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:30.101 20:58:52 -- 
common/autobuild_common.sh@189 -- $ uname -s 00:02:30.102 20:58:52 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:30.102 20:58:52 -- common/autobuild_common.sh@200 -- $ cat 00:02:30.102 20:58:52 -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:30.102 00:02:30.102 real 0m52.932s 00:02:30.102 user 6m2.433s 00:02:30.102 sys 0m54.188s 00:02:30.102 20:58:52 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:30.102 ************************************ 00:02:30.102 END TEST build_native_dpdk 00:02:30.102 ************************************ 00:02:30.102 20:58:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.102 20:58:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:30.102 20:58:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:30.102 20:58:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:30.102 20:58:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:30.102 20:58:52 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:02:30.102 20:58:52 -- spdk/autobuild.sh@58 -- $ unittest_build 00:02:30.102 20:58:52 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build 00:02:30.102 20:58:52 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:02:30.102 20:58:52 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:30.102 20:58:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.102 ************************************ 00:02:30.102 START TEST unittest_build 00:02:30.102 ************************************ 00:02:30.102 20:58:52 -- common/autotest_common.sh@1104 -- $ _unittest_build 00:02:30.102 20:58:52 -- common/autobuild_common.sh@402 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --without-shared 00:02:30.359 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:30.359 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.359 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:30.359 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:30.628 Using 'verbs' RDMA provider 00:02:46.070 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:00.952 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:00.952 Creating mk/config.mk...done. 00:03:00.952 Creating mk/cc.flags.mk...done. 00:03:00.952 Type 'make' to build. 00:03:00.952 20:59:21 -- common/autobuild_common.sh@403 -- $ make -j10 00:03:00.952 make[1]: Nothing to be done for 'all'. 
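[Editor's note] The configure step above consumes the libdpdk.pc / libdpdk-libs.pc files installed earlier (the "Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs" line). Any external consumer would resolve the same compile and link flags through pkg-config; a minimal sketch, assuming the install prefix from this log and a hypothetical hello.c:

# Point pkg-config at the freshly installed DPDK and build against it.
export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
pkg-config --modversion libdpdk                            # reports the DPDK version built above
cc hello.c $(pkg-config --cflags --libs libdpdk) -o hello  # hello.c is a hypothetical source file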
00:03:01.519 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:04.618 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] [the two warnings above repeat verbatim, once per ISA-L object assembled, from 00:03:01.519 through 00:03:15.812; the duplicate lines are omitted here]
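[Editor's note] These warnings are benign: this nasm release does not recognize the 'note' attribute on the `.note.gnu.property' section declaration and simply ignores it. The [-w+other] tag names the nasm warning class, so a build that wanted a quiet log could disable that class per object. A minimal sketch, not something this CI run does; foo.asm is a hypothetical stand-in for the ISA-L sources:

# Disable nasm's 'other' warning class (the class named in the
# [-w+other] tag above); the rest of the invocation is illustrative.
nasm -f elf64 -w-other -I ./include/ foo.asm -o foo.o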
00:03:17.752 CC lib/ut/ut.o 00:03:17.752 CC lib/ut_mock/mock.o 00:03:17.752 CC lib/log/log.o 00:03:17.752 CC lib/log/log_flags.o 00:03:17.752 CC lib/log/log_deprecated.o 00:03:17.752 LIB libspdk_ut_mock.a 00:03:17.752 LIB libspdk_log.a 00:03:17.752 LIB libspdk_ut.a 00:03:18.010 CC lib/dma/dma.o 00:03:18.010 CC lib/ioat/ioat.o 00:03:18.010 CXX lib/trace_parser/trace.o 00:03:18.010 CC lib/util/bit_array.o 00:03:18.010 CC lib/util/base64.o 00:03:18.010 CC lib/util/cpuset.o 00:03:18.010 CC lib/util/crc32c.o 00:03:18.010 CC lib/util/crc32.o 00:03:18.010 CC lib/util/crc16.o 00:03:18.010 CC lib/vfio_user/host/vfio_user_pci.o 00:03:18.268 CC lib/util/crc32_ieee.o 00:03:18.268 CC lib/util/crc64.o 00:03:18.268 CC lib/util/dif.o 00:03:18.268 CC lib/vfio_user/host/vfio_user.o 00:03:18.268 LIB libspdk_dma.a 00:03:18.268 CC lib/util/fd.o 00:03:18.268 CC lib/util/file.o 00:03:18.268 CC lib/util/hexlify.o 00:03:18.268 CC lib/util/iov.o 00:03:18.268 CC lib/util/math.o 00:03:18.268 LIB libspdk_ioat.a 00:03:18.526 CC lib/util/pipe.o 00:03:18.526 CC lib/util/strerror_tls.o 00:03:18.526 CC lib/util/string.o 00:03:18.526 LIB libspdk_vfio_user.a 00:03:18.526 CC lib/util/uuid.o 00:03:18.526 CC lib/util/fd_group.o 00:03:18.526 CC lib/util/xor.o 00:03:18.526 CC lib/util/zipf.o 00:03:19.093 LIB libspdk_util.a 00:03:19.093 CC lib/rdma/common.o 00:03:19.093 CC lib/json/json_parse.o 00:03:19.093 CC lib/rdma/rdma_verbs.o 00:03:19.093 CC lib/idxd/idxd.o 00:03:19.093 CC lib/json/json_util.o 00:03:19.093 CC lib/conf/conf.o 00:03:19.093 CC lib/idxd/idxd_user.o 00:03:19.093 CC lib/vmd/vmd.o 00:03:19.093 CC lib/env_dpdk/env.o 00:03:19.350 LIB libspdk_trace_parser.a 00:03:19.350 CC lib/env_dpdk/memory.o 00:03:19.350 LIB libspdk_conf.a 00:03:19.350 CC lib/json/json_write.o 00:03:19.350 CC lib/env_dpdk/pci.o 00:03:19.350 CC lib/env_dpdk/init.o 00:03:19.350 CC lib/env_dpdk/threads.o 00:03:19.350 LIB
libspdk_rdma.a 00:03:19.350 CC lib/env_dpdk/pci_ioat.o 00:03:19.608 CC lib/env_dpdk/pci_virtio.o 00:03:19.608 CC lib/env_dpdk/pci_vmd.o 00:03:19.608 CC lib/env_dpdk/pci_idxd.o 00:03:19.608 CC lib/env_dpdk/pci_event.o 00:03:19.608 LIB libspdk_json.a 00:03:19.608 CC lib/vmd/led.o 00:03:19.867 CC lib/env_dpdk/sigbus_handler.o 00:03:19.867 CC lib/env_dpdk/pci_dpdk.o 00:03:19.867 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:19.867 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:19.867 LIB libspdk_idxd.a 00:03:19.867 CC lib/jsonrpc/jsonrpc_server.o 00:03:19.867 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:19.867 CC lib/jsonrpc/jsonrpc_client.o 00:03:19.867 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:19.867 LIB libspdk_vmd.a 00:03:20.126 LIB libspdk_jsonrpc.a 00:03:20.385 CC lib/rpc/rpc.o 00:03:20.642 LIB libspdk_rpc.a 00:03:20.642 LIB libspdk_env_dpdk.a 00:03:20.642 CC lib/notify/notify.o 00:03:20.642 CC lib/notify/notify_rpc.o 00:03:20.642 CC lib/trace/trace.o 00:03:20.642 CC lib/trace/trace_flags.o 00:03:20.642 CC lib/trace/trace_rpc.o 00:03:20.642 CC lib/sock/sock.o 00:03:20.642 CC lib/sock/sock_rpc.o 00:03:20.899 LIB libspdk_notify.a 00:03:20.899 LIB libspdk_trace.a 00:03:21.157 CC lib/thread/iobuf.o 00:03:21.157 CC lib/thread/thread.o 00:03:21.157 LIB libspdk_sock.a 00:03:21.157 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:21.157 CC lib/nvme/nvme_fabric.o 00:03:21.157 CC lib/nvme/nvme_ctrlr.o 00:03:21.157 CC lib/nvme/nvme_pcie_common.o 00:03:21.157 CC lib/nvme/nvme_ns_cmd.o 00:03:21.157 CC lib/nvme/nvme_ns.o 00:03:21.157 CC lib/nvme/nvme_pcie.o 00:03:21.157 CC lib/nvme/nvme_qpair.o 00:03:21.415 CC lib/nvme/nvme.o 00:03:21.673 CC lib/nvme/nvme_quirks.o 00:03:21.932 CC lib/nvme/nvme_transport.o 00:03:21.932 CC lib/nvme/nvme_discovery.o 00:03:21.932 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:21.932 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:21.932 CC lib/nvme/nvme_tcp.o 00:03:22.190 CC lib/nvme/nvme_opal.o 00:03:22.190 CC lib/nvme/nvme_io_msg.o 00:03:22.190 CC lib/nvme/nvme_poll_group.o 00:03:22.448 CC lib/nvme/nvme_zns.o 00:03:22.448 CC lib/nvme/nvme_cuse.o 00:03:22.448 CC lib/nvme/nvme_vfio_user.o 00:03:22.448 CC lib/nvme/nvme_rdma.o 00:03:22.706 LIB libspdk_thread.a 00:03:22.964 CC lib/init/json_config.o 00:03:22.964 CC lib/init/subsystem.o 00:03:22.964 CC lib/virtio/virtio.o 00:03:22.964 CC lib/accel/accel.o 00:03:22.964 CC lib/blob/blobstore.o 00:03:22.964 CC lib/blob/request.o 00:03:23.222 CC lib/blob/zeroes.o 00:03:23.222 CC lib/init/subsystem_rpc.o 00:03:23.222 CC lib/blob/blob_bs_dev.o 00:03:23.222 CC lib/virtio/virtio_vhost_user.o 00:03:23.222 CC lib/init/rpc.o 00:03:23.222 CC lib/virtio/virtio_vfio_user.o 00:03:23.479 CC lib/accel/accel_rpc.o 00:03:23.479 LIB libspdk_init.a 00:03:23.479 CC lib/accel/accel_sw.o 00:03:23.479 CC lib/virtio/virtio_pci.o 00:03:23.737 CC lib/event/app.o 00:03:23.737 CC lib/event/reactor.o 00:03:23.737 CC lib/event/log_rpc.o 00:03:23.737 CC lib/event/app_rpc.o 00:03:23.737 CC lib/event/scheduler_static.o 00:03:23.994 LIB libspdk_virtio.a 00:03:23.994 LIB libspdk_nvme.a 00:03:24.253 LIB libspdk_accel.a 00:03:24.253 LIB libspdk_event.a 00:03:24.253 CC lib/bdev/bdev.o 00:03:24.253 CC lib/bdev/bdev_rpc.o 00:03:24.253 CC lib/bdev/bdev_zone.o 00:03:24.253 CC lib/bdev/part.o 00:03:24.253 CC lib/bdev/scsi_nvme.o 00:03:26.783 LIB libspdk_blob.a 00:03:27.098 CC lib/lvol/lvol.o 00:03:27.098 CC lib/blobfs/blobfs.o 00:03:27.098 CC lib/blobfs/tree.o 00:03:27.666 LIB libspdk_bdev.a 00:03:27.666 CC lib/ftl/ftl_core.o 00:03:27.666 CC lib/nbd/nbd.o 00:03:27.666 CC lib/ftl/ftl_init.o 00:03:27.666 CC 
lib/nbd/nbd_rpc.o 00:03:27.666 CC lib/scsi/dev.o 00:03:27.666 CC lib/ftl/ftl_layout.o 00:03:27.666 CC lib/scsi/lun.o 00:03:27.666 CC lib/nvmf/ctrlr.o 00:03:27.924 CC lib/ftl/ftl_debug.o 00:03:27.924 CC lib/ftl/ftl_io.o 00:03:27.924 CC lib/ftl/ftl_sb.o 00:03:28.183 LIB libspdk_blobfs.a 00:03:28.183 CC lib/ftl/ftl_l2p.o 00:03:28.183 CC lib/ftl/ftl_l2p_flat.o 00:03:28.183 CC lib/scsi/port.o 00:03:28.183 LIB libspdk_lvol.a 00:03:28.183 CC lib/scsi/scsi.o 00:03:28.183 CC lib/nvmf/ctrlr_discovery.o 00:03:28.183 LIB libspdk_nbd.a 00:03:28.183 CC lib/nvmf/ctrlr_bdev.o 00:03:28.183 CC lib/ftl/ftl_nv_cache.o 00:03:28.183 CC lib/nvmf/subsystem.o 00:03:28.183 CC lib/nvmf/nvmf.o 00:03:28.183 CC lib/scsi/scsi_bdev.o 00:03:28.183 CC lib/scsi/scsi_pr.o 00:03:28.441 CC lib/scsi/scsi_rpc.o 00:03:28.441 CC lib/scsi/task.o 00:03:28.441 CC lib/nvmf/nvmf_rpc.o 00:03:28.699 CC lib/nvmf/transport.o 00:03:28.699 CC lib/nvmf/tcp.o 00:03:28.699 CC lib/ftl/ftl_band.o 00:03:28.958 LIB libspdk_scsi.a 00:03:28.958 CC lib/ftl/ftl_band_ops.o 00:03:28.958 CC lib/ftl/ftl_writer.o 00:03:29.217 CC lib/iscsi/conn.o 00:03:29.217 CC lib/iscsi/init_grp.o 00:03:29.217 CC lib/iscsi/iscsi.o 00:03:29.217 CC lib/iscsi/md5.o 00:03:29.217 CC lib/iscsi/param.o 00:03:29.476 CC lib/ftl/ftl_rq.o 00:03:29.476 CC lib/iscsi/portal_grp.o 00:03:29.476 CC lib/iscsi/tgt_node.o 00:03:29.476 CC lib/iscsi/iscsi_subsystem.o 00:03:29.476 CC lib/ftl/ftl_reloc.o 00:03:29.476 CC lib/ftl/ftl_l2p_cache.o 00:03:29.734 CC lib/iscsi/iscsi_rpc.o 00:03:29.734 CC lib/iscsi/task.o 00:03:29.734 CC lib/ftl/ftl_p2l.o 00:03:29.993 CC lib/ftl/mngt/ftl_mngt.o 00:03:29.993 CC lib/nvmf/rdma.o 00:03:29.993 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:29.993 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:29.993 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:30.252 CC lib/vhost/vhost.o 00:03:30.252 CC lib/vhost/vhost_rpc.o 00:03:30.252 CC lib/vhost/vhost_scsi.o 00:03:30.252 CC lib/vhost/vhost_blk.o 00:03:30.252 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:30.252 CC lib/vhost/rte_vhost_user.o 00:03:30.252 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:30.518 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:30.518 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:30.778 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:30.778 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:30.778 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:30.778 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:30.778 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:30.778 CC lib/ftl/utils/ftl_conf.o 00:03:31.036 LIB libspdk_iscsi.a 00:03:31.036 CC lib/ftl/utils/ftl_md.o 00:03:31.036 CC lib/ftl/utils/ftl_mempool.o 00:03:31.036 CC lib/ftl/utils/ftl_bitmap.o 00:03:31.036 CC lib/ftl/utils/ftl_property.o 00:03:31.036 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:31.036 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:31.036 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:31.295 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:31.295 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:31.295 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:31.295 LIB libspdk_vhost.a 00:03:31.295 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:31.295 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:31.295 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:31.295 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:31.295 CC lib/ftl/base/ftl_base_dev.o 00:03:31.295 CC lib/ftl/base/ftl_base_bdev.o 00:03:31.295 CC lib/ftl/ftl_trace.o 00:03:31.862 LIB libspdk_ftl.a 00:03:32.429 LIB libspdk_nvmf.a 00:03:32.687 CC module/env_dpdk/env_dpdk_rpc.o 00:03:32.945 CC module/sock/posix/posix.o 00:03:32.945 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:32.945 CC module/blob/bdev/blob_bdev.o 00:03:32.945 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:03:32.945 CC module/accel/dsa/accel_dsa.o 00:03:32.945 CC module/accel/error/accel_error.o 00:03:32.945 CC module/accel/iaa/accel_iaa.o 00:03:32.945 CC module/accel/ioat/accel_ioat.o 00:03:32.945 CC module/scheduler/gscheduler/gscheduler.o 00:03:32.945 LIB libspdk_env_dpdk_rpc.a 00:03:32.945 CC module/accel/ioat/accel_ioat_rpc.o 00:03:32.945 LIB libspdk_scheduler_dpdk_governor.a 00:03:32.945 LIB libspdk_scheduler_gscheduler.a 00:03:32.945 LIB libspdk_scheduler_dynamic.a 00:03:32.946 CC module/accel/iaa/accel_iaa_rpc.o 00:03:32.946 CC module/accel/dsa/accel_dsa_rpc.o 00:03:32.946 CC module/accel/error/accel_error_rpc.o 00:03:33.204 LIB libspdk_accel_ioat.a 00:03:33.204 LIB libspdk_blob_bdev.a 00:03:33.204 LIB libspdk_accel_iaa.a 00:03:33.204 LIB libspdk_accel_dsa.a 00:03:33.204 LIB libspdk_accel_error.a 00:03:33.204 CC module/blobfs/bdev/blobfs_bdev.o 00:03:33.204 CC module/bdev/lvol/vbdev_lvol.o 00:03:33.204 CC module/bdev/gpt/gpt.o 00:03:33.204 CC module/bdev/delay/vbdev_delay.o 00:03:33.204 CC module/bdev/error/vbdev_error.o 00:03:33.204 CC module/bdev/nvme/bdev_nvme.o 00:03:33.204 CC module/bdev/malloc/bdev_malloc.o 00:03:33.204 CC module/bdev/null/bdev_null.o 00:03:33.462 CC module/bdev/passthru/vbdev_passthru.o 00:03:33.462 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:33.462 CC module/bdev/gpt/vbdev_gpt.o 00:03:33.721 CC module/bdev/error/vbdev_error_rpc.o 00:03:33.721 CC module/bdev/null/bdev_null_rpc.o 00:03:33.721 LIB libspdk_blobfs_bdev.a 00:03:33.721 LIB libspdk_sock_posix.a 00:03:33.721 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:33.721 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:33.721 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:33.721 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:33.721 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:33.721 LIB libspdk_bdev_error.a 00:03:33.721 LIB libspdk_bdev_null.a 00:03:33.721 LIB libspdk_bdev_gpt.a 00:03:33.721 CC module/bdev/nvme/nvme_rpc.o 00:03:33.979 LIB libspdk_bdev_passthru.a 00:03:33.979 LIB libspdk_bdev_malloc.a 00:03:33.979 CC module/bdev/raid/bdev_raid.o 00:03:33.979 CC module/bdev/raid/bdev_raid_rpc.o 00:03:33.979 CC module/bdev/split/vbdev_split.o 00:03:33.979 LIB libspdk_bdev_delay.a 00:03:33.979 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:33.979 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:33.979 CC module/bdev/aio/bdev_aio.o 00:03:33.979 CC module/bdev/nvme/bdev_mdns_client.o 00:03:33.979 LIB libspdk_bdev_lvol.a 00:03:34.238 CC module/bdev/aio/bdev_aio_rpc.o 00:03:34.238 CC module/bdev/raid/bdev_raid_sb.o 00:03:34.238 CC module/bdev/raid/raid0.o 00:03:34.238 CC module/bdev/split/vbdev_split_rpc.o 00:03:34.238 CC module/bdev/raid/raid1.o 00:03:34.238 CC module/bdev/raid/concat.o 00:03:34.238 LIB libspdk_bdev_zone_block.a 00:03:34.238 LIB libspdk_bdev_split.a 00:03:34.496 LIB libspdk_bdev_aio.a 00:03:34.496 CC module/bdev/raid/raid5f.o 00:03:34.496 CC module/bdev/nvme/vbdev_opal.o 00:03:34.496 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:34.496 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:34.496 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:34.496 CC module/bdev/ftl/bdev_ftl.o 00:03:34.496 CC module/bdev/iscsi/bdev_iscsi.o 00:03:34.496 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:34.496 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:34.754 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:34.754 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:34.754 LIB libspdk_bdev_ftl.a 00:03:35.013 LIB libspdk_bdev_raid.a 00:03:35.013 LIB libspdk_bdev_iscsi.a 00:03:35.271 LIB 
libspdk_bdev_virtio.a 00:03:35.838 LIB libspdk_bdev_nvme.a 00:03:36.097 CC module/event/subsystems/sock/sock.o 00:03:36.097 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:36.097 CC module/event/subsystems/vmd/vmd.o 00:03:36.097 CC module/event/subsystems/iobuf/iobuf.o 00:03:36.097 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:36.097 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:36.097 CC module/event/subsystems/scheduler/scheduler.o 00:03:36.356 LIB libspdk_event_vhost_blk.a 00:03:36.356 LIB libspdk_event_sock.a 00:03:36.356 LIB libspdk_event_iobuf.a 00:03:36.356 LIB libspdk_event_vmd.a 00:03:36.356 LIB libspdk_event_scheduler.a 00:03:36.356 CC module/event/subsystems/accel/accel.o 00:03:36.615 LIB libspdk_event_accel.a 00:03:36.615 CC module/event/subsystems/bdev/bdev.o 00:03:36.874 LIB libspdk_event_bdev.a 00:03:36.874 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:36.874 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:36.874 CC module/event/subsystems/scsi/scsi.o 00:03:36.874 CC module/event/subsystems/nbd/nbd.o 00:03:37.132 LIB libspdk_event_nbd.a 00:03:37.132 LIB libspdk_event_scsi.a 00:03:37.133 LIB libspdk_event_nvmf.a 00:03:37.390 CC module/event/subsystems/iscsi/iscsi.o 00:03:37.390 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:37.390 LIB libspdk_event_vhost_scsi.a 00:03:37.390 LIB libspdk_event_iscsi.a 00:03:37.648 CXX app/trace/trace.o 00:03:37.648 CC examples/ioat/perf/perf.o 00:03:37.648 CC examples/accel/perf/accel_perf.o 00:03:37.648 CC examples/nvme/hello_world/hello_world.o 00:03:37.648 CC test/blobfs/mkfs/mkfs.o 00:03:37.648 CC examples/blob/hello_world/hello_blob.o 00:03:37.648 CC test/app/bdev_svc/bdev_svc.o 00:03:37.648 CC test/accel/dif/dif.o 00:03:37.648 CC test/bdev/bdevio/bdevio.o 00:03:37.648 CC examples/bdev/hello_world/hello_bdev.o 00:03:37.907 LINK mkfs 00:03:37.907 LINK bdev_svc 00:03:37.907 LINK ioat_perf 00:03:37.907 LINK hello_world 00:03:38.180 LINK hello_blob 00:03:38.180 LINK hello_bdev 00:03:38.180 LINK spdk_trace 00:03:38.180 LINK accel_perf 00:03:38.180 LINK dif 00:03:38.180 LINK bdevio 00:03:38.449 CC app/trace_record/trace_record.o 00:03:38.707 CC examples/ioat/verify/verify.o 00:03:38.707 LINK spdk_trace_record 00:03:38.964 LINK verify 00:03:39.222 CC examples/nvme/reconnect/reconnect.o 00:03:39.222 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:39.222 CC app/nvmf_tgt/nvmf_main.o 00:03:39.480 LINK reconnect 00:03:39.480 LINK nvmf_tgt 00:03:39.480 CC test/app/histogram_perf/histogram_perf.o 00:03:39.747 LINK nvme_fuzz 00:03:39.747 LINK histogram_perf 00:03:40.688 CC examples/sock/hello_world/hello_sock.o 00:03:40.688 CC examples/bdev/bdevperf/bdevperf.o 00:03:40.688 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:40.688 CC examples/vmd/lsvmd/lsvmd.o 00:03:40.688 CC examples/vmd/led/led.o 00:03:40.688 LINK hello_sock 00:03:40.946 LINK lsvmd 00:03:40.946 LINK led 00:03:40.946 CC examples/nvmf/nvmf/nvmf.o 00:03:40.946 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:40.946 CC examples/util/zipf/zipf.o 00:03:41.204 LINK nvme_manage 00:03:41.204 LINK zipf 00:03:41.204 CC examples/blob/cli/blobcli.o 00:03:41.204 LINK nvmf 00:03:41.204 LINK bdevperf 00:03:41.463 CC test/app/jsoncat/jsoncat.o 00:03:41.722 LINK jsoncat 00:03:41.722 CC test/app/stub/stub.o 00:03:41.722 LINK blobcli 00:03:41.980 CC examples/thread/thread/thread_ex.o 00:03:41.980 LINK stub 00:03:41.980 CC examples/idxd/perf/perf.o 00:03:42.239 LINK thread 00:03:42.239 CC examples/nvme/arbitration/arbitration.o 00:03:42.239 CC examples/interrupt_tgt/interrupt_tgt.o 
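At this point the example binaries named in the LINK lines (hello_world, hello_bdev, accel_perf, and the rest) exist as standalone executables that can also be exercised outside the test harness. A sketch under assumed paths — recent SPDK trees place these under build/examples, but this log never prints the output directory, and the Malloc0 bdev name is likewise an assumption:
  # Attaches to a local NVMe controller, writes a test string, and reads it back.
  ./build/examples/hello_world
  # -b selects the bdev to drive; assumes a malloc bdev named Malloc0 is configured.
  ./build/examples/hello_bdev -b Malloc0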
00:03:42.497 LINK idxd_perf 00:03:42.497 LINK interrupt_tgt 00:03:42.754 LINK arbitration 00:03:42.754 CC app/iscsi_tgt/iscsi_tgt.o 00:03:43.010 LINK iscsi_fuzz 00:03:43.010 LINK iscsi_tgt 00:03:43.010 CC app/spdk_tgt/spdk_tgt.o 00:03:43.267 CC app/spdk_lspci/spdk_lspci.o 00:03:43.267 LINK spdk_tgt 00:03:43.524 LINK spdk_lspci 00:03:43.782 CC app/spdk_nvme_perf/perf.o 00:03:43.782 CC examples/nvme/hotplug/hotplug.o 00:03:44.713 LINK hotplug 00:03:44.713 CC app/spdk_nvme_identify/identify.o 00:03:44.713 CC app/spdk_nvme_discover/discovery_aer.o 00:03:44.713 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:44.713 CC app/spdk_top/spdk_top.o 00:03:44.713 TEST_HEADER include/spdk/accel_module.h 00:03:44.713 TEST_HEADER include/spdk/bit_pool.h 00:03:44.713 TEST_HEADER include/spdk/ioat.h 00:03:44.713 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:44.713 TEST_HEADER include/spdk/blobfs.h 00:03:44.713 TEST_HEADER include/spdk/notify.h 00:03:44.713 LINK spdk_nvme_discover 00:03:44.970 TEST_HEADER include/spdk/pipe.h 00:03:44.970 TEST_HEADER include/spdk/accel.h 00:03:44.970 TEST_HEADER include/spdk/file.h 00:03:44.970 TEST_HEADER include/spdk/version.h 00:03:44.970 TEST_HEADER include/spdk/trace_parser.h 00:03:44.970 TEST_HEADER include/spdk/opal_spec.h 00:03:44.970 TEST_HEADER include/spdk/uuid.h 00:03:44.970 TEST_HEADER include/spdk/likely.h 00:03:44.970 TEST_HEADER include/spdk/dif.h 00:03:44.970 TEST_HEADER include/spdk/memory.h 00:03:44.970 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:44.970 TEST_HEADER include/spdk/dma.h 00:03:44.970 TEST_HEADER include/spdk/nbd.h 00:03:44.970 TEST_HEADER include/spdk/conf.h 00:03:44.970 TEST_HEADER include/spdk/env_dpdk.h 00:03:44.970 TEST_HEADER include/spdk/nvmf_spec.h 00:03:44.970 TEST_HEADER include/spdk/iscsi_spec.h 00:03:44.970 TEST_HEADER include/spdk/mmio.h 00:03:44.970 TEST_HEADER include/spdk/json.h 00:03:44.970 TEST_HEADER include/spdk/opal.h 00:03:44.970 TEST_HEADER include/spdk/bdev.h 00:03:44.970 TEST_HEADER include/spdk/base64.h 00:03:44.970 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:44.970 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:44.970 TEST_HEADER include/spdk/fd.h 00:03:44.970 TEST_HEADER include/spdk/barrier.h 00:03:44.970 TEST_HEADER include/spdk/scsi_spec.h 00:03:44.970 TEST_HEADER include/spdk/zipf.h 00:03:44.970 TEST_HEADER include/spdk/nvmf.h 00:03:44.970 TEST_HEADER include/spdk/queue.h 00:03:44.970 TEST_HEADER include/spdk/xor.h 00:03:44.971 TEST_HEADER include/spdk/cpuset.h 00:03:44.971 TEST_HEADER include/spdk/thread.h 00:03:44.971 TEST_HEADER include/spdk/bdev_zone.h 00:03:44.971 TEST_HEADER include/spdk/fd_group.h 00:03:44.971 TEST_HEADER include/spdk/tree.h 00:03:44.971 TEST_HEADER include/spdk/blob_bdev.h 00:03:44.971 TEST_HEADER include/spdk/crc64.h 00:03:44.971 TEST_HEADER include/spdk/assert.h 00:03:44.971 LINK spdk_nvme_perf 00:03:44.971 TEST_HEADER include/spdk/nvme_spec.h 00:03:44.971 TEST_HEADER include/spdk/endian.h 00:03:44.971 TEST_HEADER include/spdk/pci_ids.h 00:03:44.971 TEST_HEADER include/spdk/log.h 00:03:44.971 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:44.971 TEST_HEADER include/spdk/ftl.h 00:03:44.971 TEST_HEADER include/spdk/config.h 00:03:44.971 TEST_HEADER include/spdk/vhost.h 00:03:44.971 TEST_HEADER include/spdk/bdev_module.h 00:03:44.971 TEST_HEADER include/spdk/nvme_intel.h 00:03:44.971 TEST_HEADER include/spdk/idxd_spec.h 00:03:44.971 TEST_HEADER include/spdk/crc16.h 00:03:44.971 TEST_HEADER include/spdk/nvme.h 00:03:44.971 TEST_HEADER include/spdk/stdinc.h 00:03:44.971 TEST_HEADER 
include/spdk/scsi.h 00:03:44.971 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:44.971 TEST_HEADER include/spdk/idxd.h 00:03:44.971 TEST_HEADER include/spdk/hexlify.h 00:03:44.971 TEST_HEADER include/spdk/reduce.h 00:03:44.971 TEST_HEADER include/spdk/crc32.h 00:03:44.971 TEST_HEADER include/spdk/init.h 00:03:44.971 TEST_HEADER include/spdk/nvmf_transport.h 00:03:44.971 TEST_HEADER include/spdk/nvme_zns.h 00:03:44.971 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:44.971 TEST_HEADER include/spdk/util.h 00:03:44.971 TEST_HEADER include/spdk/jsonrpc.h 00:03:44.971 TEST_HEADER include/spdk/env.h 00:03:44.971 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:44.971 TEST_HEADER include/spdk/lvol.h 00:03:44.971 TEST_HEADER include/spdk/histogram_data.h 00:03:44.971 TEST_HEADER include/spdk/event.h 00:03:44.971 TEST_HEADER include/spdk/trace.h 00:03:44.971 TEST_HEADER include/spdk/ioat_spec.h 00:03:44.971 TEST_HEADER include/spdk/string.h 00:03:44.971 TEST_HEADER include/spdk/ublk.h 00:03:44.971 TEST_HEADER include/spdk/bit_array.h 00:03:44.971 TEST_HEADER include/spdk/scheduler.h 00:03:44.971 TEST_HEADER include/spdk/blob.h 00:03:44.971 TEST_HEADER include/spdk/gpt_spec.h 00:03:44.971 TEST_HEADER include/spdk/sock.h 00:03:44.971 TEST_HEADER include/spdk/vmd.h 00:03:44.971 TEST_HEADER include/spdk/rpc.h 00:03:44.971 CXX test/cpp_headers/accel_module.o 00:03:45.229 CXX test/cpp_headers/bit_pool.o 00:03:45.229 CXX test/cpp_headers/ioat.o 00:03:45.229 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:45.229 LINK vhost_fuzz 00:03:45.488 CXX test/cpp_headers/blobfs.o 00:03:45.488 LINK cmb_copy 00:03:45.488 LINK spdk_nvme_identify 00:03:45.746 CXX test/cpp_headers/notify.o 00:03:45.746 CXX test/cpp_headers/pipe.o 00:03:45.746 LINK spdk_top 00:03:46.009 CC test/event/event_perf/event_perf.o 00:03:46.009 CXX test/cpp_headers/accel.o 00:03:46.009 CC test/env/mem_callbacks/mem_callbacks.o 00:03:46.009 CC test/dma/test_dma/test_dma.o 00:03:46.009 CXX test/cpp_headers/file.o 00:03:46.009 LINK event_perf 00:03:46.268 CXX test/cpp_headers/version.o 00:03:46.268 CXX test/cpp_headers/trace_parser.o 00:03:46.268 CC test/lvol/esnap/esnap.o 00:03:46.268 CC app/vhost/vhost.o 00:03:46.268 LINK mem_callbacks 00:03:46.268 CXX test/cpp_headers/opal_spec.o 00:03:46.268 LINK test_dma 00:03:46.526 CXX test/cpp_headers/uuid.o 00:03:46.526 CXX test/cpp_headers/likely.o 00:03:46.526 LINK vhost 00:03:46.526 CC test/event/reactor/reactor.o 00:03:46.526 CC test/event/reactor_perf/reactor_perf.o 00:03:46.782 CC app/spdk_dd/spdk_dd.o 00:03:46.782 CXX test/cpp_headers/dif.o 00:03:46.782 LINK reactor 00:03:46.782 LINK reactor_perf 00:03:46.782 CC examples/nvme/abort/abort.o 00:03:46.782 CC app/fio/nvme/fio_plugin.o 00:03:46.782 CC test/env/vtophys/vtophys.o 00:03:46.782 CXX test/cpp_headers/memory.o 00:03:47.039 LINK vtophys 00:03:47.039 CXX test/cpp_headers/vfio_user_pci.o 00:03:47.039 LINK spdk_dd 00:03:47.297 LINK abort 00:03:47.297 CXX test/cpp_headers/dma.o 00:03:47.297 CXX test/cpp_headers/nbd.o 00:03:47.554 CXX test/cpp_headers/conf.o 00:03:47.554 CXX test/cpp_headers/env_dpdk.o 00:03:47.554 LINK spdk_nvme 00:03:47.554 CXX test/cpp_headers/nvmf_spec.o 00:03:47.554 CC test/event/app_repeat/app_repeat.o 00:03:47.812 CC test/event/scheduler/scheduler.o 00:03:47.812 LINK app_repeat 00:03:47.812 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:47.812 CXX test/cpp_headers/iscsi_spec.o 00:03:48.071 LINK env_dpdk_post_init 00:03:48.071 LINK scheduler 00:03:48.071 CXX test/cpp_headers/mmio.o 00:03:48.329 CXX test/cpp_headers/json.o 
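The TEST_HEADER list and the CXX test/cpp_headers/*.o compiles that follow are SPDK's header self-containment check: each public header is built as its own translation unit, so a header that silently relies on something its includer happened to pull in fails loudly here. The same check can be approximated with a one-liner — the compiler, include path, and header chosen below are illustrative assumptions:
  # Compile a lone header from stdin; any missing transitive include is an error.
  echo '#include "spdk/accel_module.h"' | g++ -x c++ -I include -c - -o /dev/null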
00:03:48.587 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:48.587 CXX test/cpp_headers/opal.o 00:03:48.587 CC app/fio/bdev/fio_plugin.o 00:03:48.846 CXX test/cpp_headers/bdev.o 00:03:48.846 LINK pmr_persistence 00:03:48.846 CXX test/cpp_headers/base64.o 00:03:49.105 CC test/nvme/aer/aer.o 00:03:49.105 CC test/env/memory/memory_ut.o 00:03:49.105 CXX test/cpp_headers/blobfs_bdev.o 00:03:49.105 CC test/nvme/reset/reset.o 00:03:49.427 LINK spdk_bdev 00:03:49.427 CXX test/cpp_headers/nvme_ocssd.o 00:03:49.427 LINK aer 00:03:49.685 LINK reset 00:03:49.685 CXX test/cpp_headers/fd.o 00:03:49.944 CXX test/cpp_headers/barrier.o 00:03:49.944 LINK memory_ut 00:03:50.202 CXX test/cpp_headers/scsi_spec.o 00:03:50.202 CC test/nvme/sgl/sgl.o 00:03:50.202 CXX test/cpp_headers/zipf.o 00:03:50.202 CC test/nvme/e2edp/nvme_dp.o 00:03:50.461 CXX test/cpp_headers/nvmf.o 00:03:50.461 CC test/env/pci/pci_ut.o 00:03:50.461 CC test/nvme/overhead/overhead.o 00:03:50.461 CXX test/cpp_headers/queue.o 00:03:50.461 LINK sgl 00:03:50.722 CC test/rpc_client/rpc_client_test.o 00:03:50.722 CXX test/cpp_headers/xor.o 00:03:50.722 LINK nvme_dp 00:03:50.981 LINK rpc_client_test 00:03:50.981 LINK overhead 00:03:50.981 CXX test/cpp_headers/cpuset.o 00:03:50.981 CC test/thread/poller_perf/poller_perf.o 00:03:50.981 LINK pci_ut 00:03:50.981 CC test/nvme/err_injection/err_injection.o 00:03:51.240 LINK poller_perf 00:03:51.240 CXX test/cpp_headers/thread.o 00:03:51.240 LINK err_injection 00:03:51.240 CXX test/cpp_headers/bdev_zone.o 00:03:51.499 CXX test/cpp_headers/fd_group.o 00:03:51.499 CC test/thread/lock/spdk_lock.o 00:03:51.756 CC test/nvme/startup/startup.o 00:03:51.756 CXX test/cpp_headers/tree.o 00:03:51.756 CXX test/cpp_headers/blob_bdev.o 00:03:51.756 CC test/nvme/reserve/reserve.o 00:03:51.756 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:51.756 CXX test/cpp_headers/crc64.o 00:03:51.756 LINK startup 00:03:52.015 CXX test/cpp_headers/assert.o 00:03:52.015 CXX test/cpp_headers/nvme_spec.o 00:03:52.015 LINK reserve 00:03:52.015 LINK histogram_ut 00:03:52.015 CXX test/cpp_headers/endian.o 00:03:52.015 LINK esnap 00:03:52.015 CC test/nvme/simple_copy/simple_copy.o 00:03:52.274 CXX test/cpp_headers/pci_ids.o 00:03:52.274 CXX test/cpp_headers/log.o 00:03:52.274 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:52.274 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:52.274 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:52.274 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:52.274 LINK simple_copy 00:03:52.531 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:52.531 CXX test/cpp_headers/ftl.o 00:03:52.531 CXX test/cpp_headers/config.o 00:03:52.531 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:52.790 CXX test/cpp_headers/vhost.o 00:03:52.790 LINK scsi_nvme_ut 00:03:52.790 CXX test/cpp_headers/bdev_module.o 00:03:52.790 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:53.048 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:53.048 LINK gpt_ut 00:03:53.048 CXX test/cpp_headers/nvme_intel.o 00:03:53.048 CC test/nvme/connect_stress/connect_stress.o 00:03:53.305 CXX test/cpp_headers/idxd_spec.o 00:03:53.305 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:53.305 LINK connect_stress 00:03:53.305 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:53.305 LINK spdk_lock 00:03:53.305 CXX test/cpp_headers/crc16.o 00:03:53.568 CXX test/cpp_headers/nvme.o 00:03:53.828 LINK bdev_zone_ut 00:03:53.828 CXX test/cpp_headers/stdinc.o 00:03:53.828 CXX test/cpp_headers/scsi.o 00:03:54.087 CC 
test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:54.087 LINK vbdev_lvol_ut 00:03:54.087 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:54.087 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:54.345 CXX test/cpp_headers/idxd.o 00:03:54.345 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:54.345 CC test/nvme/boot_partition/boot_partition.o 00:03:54.603 CXX test/cpp_headers/hexlify.o 00:03:54.603 LINK boot_partition 00:03:54.603 LINK tree_ut 00:03:54.603 LINK blob_bdev_ut 00:03:54.603 CXX test/cpp_headers/reduce.o 00:03:54.603 LINK accel_ut 00:03:54.862 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:54.862 CXX test/cpp_headers/crc32.o 00:03:54.862 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:54.862 LINK vbdev_zone_block_ut 00:03:55.120 CXX test/cpp_headers/init.o 00:03:55.120 CXX test/cpp_headers/nvmf_transport.o 00:03:55.120 CXX test/cpp_headers/nvme_zns.o 00:03:55.379 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:55.379 CC test/unit/lib/event/app.c/app_ut.o 00:03:55.379 CXX test/cpp_headers/vfio_user_spec.o 00:03:55.379 LINK bdev_raid_ut 00:03:55.379 CC test/nvme/compliance/nvme_compliance.o 00:03:55.639 CXX test/cpp_headers/util.o 00:03:55.639 LINK dma_ut 00:03:55.639 CXX test/cpp_headers/jsonrpc.o 00:03:55.639 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:55.897 CXX test/cpp_headers/env.o 00:03:55.897 LINK nvme_compliance 00:03:55.897 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:55.897 LINK app_ut 00:03:56.156 LINK part_ut 00:03:56.156 LINK blobfs_async_ut 00:03:56.156 CXX test/cpp_headers/nvmf_cmd.o 00:03:56.156 LINK bdev_raid_sb_ut 00:03:56.412 CXX test/cpp_headers/lvol.o 00:03:56.412 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:56.412 CC test/nvme/fused_ordering/fused_ordering.o 00:03:56.412 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:56.412 CXX test/cpp_headers/histogram_data.o 00:03:56.412 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:56.412 LINK concat_ut 00:03:56.671 CXX test/cpp_headers/event.o 00:03:56.671 LINK fused_ordering 00:03:56.934 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:56.935 CXX test/cpp_headers/trace.o 00:03:56.935 LINK ioat_ut 00:03:56.935 CXX test/cpp_headers/ioat_spec.o 00:03:56.935 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:57.196 LINK bdev_ut 00:03:57.196 CXX test/cpp_headers/string.o 00:03:57.196 CC test/nvme/fdp/fdp.o 00:03:57.196 LINK doorbell_aers 00:03:57.196 LINK reactor_ut 00:03:57.196 LINK raid1_ut 00:03:57.454 CXX test/cpp_headers/ublk.o 00:03:57.454 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:57.454 CXX test/cpp_headers/bit_array.o 00:03:57.454 LINK fdp 00:03:57.713 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:57.713 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:03:57.713 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:57.713 CXX test/cpp_headers/scheduler.o 00:03:57.713 LINK blobfs_sync_ut 00:03:57.713 LINK bdev_ut 00:03:57.971 CXX test/cpp_headers/blob.o 00:03:57.971 CXX test/cpp_headers/gpt_spec.o 00:03:57.971 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:58.230 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:58.230 CXX test/cpp_headers/sock.o 00:03:58.230 LINK json_util_ut 00:03:58.230 LINK blobfs_bdev_ut 00:03:58.488 CXX test/cpp_headers/vmd.o 00:03:58.488 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:58.488 CC test/unit/lib/log/log.c/log_ut.o 00:03:58.488 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:58.488 CXX test/cpp_headers/rpc.o 00:03:58.488 CC 
test/nvme/cuse/cuse.o 00:03:58.747 LINK conn_ut 00:03:58.747 LINK raid5f_ut 00:03:58.747 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:58.747 LINK log_ut 00:03:58.747 LINK jsonrpc_server_ut 00:03:58.747 LINK json_write_ut 00:03:59.311 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:59.311 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:59.311 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:59.311 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:59.311 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:59.569 LINK notify_ut 00:03:59.569 LINK cuse 00:03:59.569 LINK init_grp_ut 00:03:59.826 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:59.826 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:59.826 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:04:00.392 LINK json_parse_ut 00:04:00.392 LINK param_ut 00:04:00.392 LINK nvme_ut 00:04:00.649 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:04:00.649 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:04:00.649 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:04:00.649 LINK lvol_ut 00:04:01.214 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:04:01.214 LINK nvme_ctrlr_cmd_ut 00:04:01.214 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:04:01.473 LINK nvme_ns_ut 00:04:01.473 LINK nvme_ctrlr_ocssd_cmd_ut 00:04:01.473 LINK iscsi_ut 00:04:01.731 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:04:01.731 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:04:01.994 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:04:02.256 LINK blob_ut 00:04:02.256 LINK dev_ut 00:04:02.514 LINK portal_grp_ut 00:04:02.514 LINK lun_ut 00:04:02.514 LINK nvme_ns_ocssd_cmd_ut 00:04:02.514 LINK nvme_ns_cmd_ut 00:04:02.514 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:04:02.771 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:04:02.771 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:04:02.771 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:04:02.771 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:04:02.771 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:04:02.771 LINK nvme_ctrlr_ut 00:04:03.029 LINK bdev_nvme_ut 00:04:03.287 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:04:03.287 LINK scsi_ut 00:04:03.287 LINK tcp_ut 00:04:03.287 LINK nvme_quirks_ut 00:04:03.287 LINK nvme_pcie_ut 00:04:03.545 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:04:03.545 LINK tgt_node_ut 00:04:03.545 CC test/unit/lib/thread/thread.c/thread_ut.o 00:04:03.545 CC test/unit/lib/sock/sock.c/sock_ut.o 00:04:03.545 LINK nvme_poll_group_ut 00:04:03.804 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:04:03.804 CC test/unit/lib/util/base64.c/base64_ut.o 00:04:03.804 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:04:03.804 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:04:04.062 LINK base64_ut 00:04:04.062 LINK nvme_qpair_ut 00:04:04.320 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:04:04.320 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:04:04.320 LINK scsi_bdev_ut 00:04:04.586 LINK iobuf_ut 00:04:04.586 LINK bit_array_ut 00:04:04.586 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:04:05.154 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:04:05.154 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:04:05.154 LINK scsi_pr_ut 00:04:05.154 LINK sock_ut 00:04:05.154 LINK cpuset_ut 00:04:05.412 CC test/unit/lib/sock/posix.c/posix_ut.o 00:04:05.412 LINK nvme_tcp_ut 00:04:05.412 LINK ctrlr_bdev_ut 00:04:05.412 LINK pci_event_ut 00:04:05.412 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:04:05.412 
CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:04:05.670 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:04:05.670 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:04:05.670 LINK crc16_ut 00:04:05.670 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:04:05.670 LINK thread_ut 00:04:05.930 LINK ctrlr_discovery_ut 00:04:05.930 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:04:05.930 LINK subsystem_ut 00:04:05.930 LINK subsystem_ut 00:04:06.189 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:04:06.189 LINK crc32_ieee_ut 00:04:06.189 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:04:06.189 LINK ctrlr_ut 00:04:06.189 LINK crc32c_ut 00:04:06.189 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:04:06.447 LINK posix_ut 00:04:06.448 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:04:06.448 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:04:06.448 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:04:06.448 LINK nvme_transport_ut 00:04:06.448 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:04:06.706 LINK crc64_ut 00:04:06.706 CC test/unit/lib/rdma/common.c/common_ut.o 00:04:06.706 LINK rpc_ut 00:04:06.706 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:04:06.706 LINK nvmf_ut 00:04:06.706 CC test/unit/lib/util/dif.c/dif_ut.o 00:04:06.965 LINK nvme_io_msg_ut 00:04:06.965 LINK idxd_user_ut 00:04:06.965 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:04:06.965 LINK ftl_l2p_ut 00:04:07.222 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:04:07.222 LINK common_ut 00:04:07.222 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:04:07.223 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:04:07.223 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:04:07.223 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:04:07.481 LINK ftl_bitmap_ut 00:04:07.738 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:04:07.738 LINK ftl_mempool_ut 00:04:07.738 LINK ftl_io_ut 00:04:07.738 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:04:08.000 LINK dif_ut 00:04:08.000 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:04:08.000 LINK idxd_ut 00:04:08.274 CC test/unit/lib/util/iov.c/iov_ut.o 00:04:08.274 LINK ftl_mngt_ut 00:04:08.274 LINK ftl_band_ut 00:04:08.274 CC test/unit/lib/util/math.c/math_ut.o 00:04:08.533 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:04:08.533 LINK nvme_pcie_common_ut 00:04:08.533 LINK vhost_ut 00:04:08.533 LINK iov_ut 00:04:08.533 LINK math_ut 00:04:08.533 CC test/unit/lib/util/string.c/string_ut.o 00:04:08.533 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:04:08.791 CC test/unit/lib/util/xor.c/xor_ut.o 00:04:08.791 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:04:08.791 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:04:08.791 LINK string_ut 00:04:08.791 LINK pipe_ut 00:04:09.050 LINK xor_ut 00:04:09.050 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:04:09.309 LINK ftl_sb_ut 00:04:09.309 LINK rdma_ut 00:04:09.309 LINK transport_ut 00:04:09.309 LINK ftl_layout_upgrade_ut 00:04:09.309 LINK nvme_fabric_ut 00:04:09.567 LINK nvme_opal_ut 00:04:10.503 LINK nvme_cuse_ut 00:04:10.762 LINK nvme_rdma_ut 00:04:10.762 ************************************ 00:04:10.762 END TEST unittest_build 00:04:10.762 ************************************ 00:04:10.762 00:04:10.762 real 1m40.685s 00:04:10.762 user 8m18.421s 00:04:10.762 sys 1m35.484s 00:04:10.762 21:00:33 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:10.762 21:00:33 -- common/autotest_common.sh@10 -- $ set +x 00:04:11.020 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:04:11.020 
21:00:33 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:11.020 21:00:33 -- nvmf/common.sh@7 -- # uname -s 00:04:11.020 21:00:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:11.020 21:00:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:11.020 21:00:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:11.020 21:00:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:11.020 21:00:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:11.020 21:00:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:11.020 21:00:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:11.020 21:00:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:11.020 21:00:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:11.020 21:00:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:11.020 21:00:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ee69ef5e-9fc4-424f-b729-5715bb8e805b 00:04:11.020 21:00:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=ee69ef5e-9fc4-424f-b729-5715bb8e805b 00:04:11.020 21:00:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:11.020 21:00:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:11.020 21:00:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:11.020 21:00:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:11.020 21:00:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:11.020 21:00:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:11.020 21:00:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:11.020 21:00:33 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:11.020 21:00:33 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:11.020 21:00:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:11.020 21:00:33 -- paths/export.sh@5 -- # export PATH 00:04:11.020 21:00:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:11.020 21:00:33 -- nvmf/common.sh@46 -- # : 0 00:04:11.020 21:00:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:11.020 21:00:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:11.020 21:00:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:11.020 21:00:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:11.020 21:00:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:11.020 21:00:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:11.020 21:00:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:11.020 21:00:33 -- 
nvmf/common.sh@50 -- # have_pci_nics=0 00:04:11.020 21:00:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:11.020 21:00:33 -- spdk/autotest.sh@32 -- # uname -s 00:04:11.020 21:00:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:11.020 21:00:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:04:11.020 21:00:33 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:11.020 21:00:33 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:11.020 21:00:33 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:11.020 21:00:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:11.588 21:00:34 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:11.588 21:00:34 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:04:11.588 21:00:34 -- spdk/autotest.sh@48 -- # udevadm_pid=105913 00:04:11.588 21:00:34 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:11.588 21:00:34 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:04:11.588 21:00:34 -- spdk/autotest.sh@54 -- # echo 105984 00:04:11.588 21:00:34 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:11.588 21:00:34 -- spdk/autotest.sh@56 -- # echo 106008 00:04:11.588 21:00:34 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:11.588 21:00:34 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:11.588 21:00:34 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:11.588 21:00:34 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:11.588 21:00:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:11.588 21:00:34 -- common/autotest_common.sh@10 -- # set +x 00:04:11.588 21:00:34 -- spdk/autotest.sh@70 -- # create_test_list 00:04:11.588 21:00:34 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:11.588 21:00:34 -- common/autotest_common.sh@10 -- # set +x 00:04:11.588 21:00:34 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:11.588 21:00:34 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:11.588 21:00:34 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:11.588 21:00:34 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:11.588 21:00:34 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:11.588 21:00:34 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:11.588 21:00:34 -- common/autotest_common.sh@1440 -- # uname 00:04:11.588 21:00:34 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:11.588 21:00:34 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:11.588 21:00:34 -- common/autotest_common.sh@1460 -- # uname 00:04:11.588 21:00:34 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:11.588 21:00:34 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:11.588 21:00:34 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:11.588 21:00:34 -- spdk/autotest.sh@83 -- # hash lcov 00:04:11.588 21:00:34 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:11.588 21:00:34 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:11.588 --rc lcov_branch_coverage=1 00:04:11.588 --rc lcov_function_coverage=1 
00:04:11.588 --rc genhtml_branch_coverage=1 00:04:11.588 --rc genhtml_function_coverage=1 00:04:11.588 --rc genhtml_legend=1 00:04:11.588 --rc geninfo_all_blocks=1 00:04:11.588 ' 00:04:11.588 21:00:34 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:11.588 --rc lcov_branch_coverage=1 00:04:11.588 --rc lcov_function_coverage=1 00:04:11.588 --rc genhtml_branch_coverage=1 00:04:11.588 --rc genhtml_function_coverage=1 00:04:11.588 --rc genhtml_legend=1 00:04:11.588 --rc geninfo_all_blocks=1 00:04:11.588 ' 00:04:11.588 21:00:34 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:11.588 --rc lcov_branch_coverage=1 00:04:11.588 --rc lcov_function_coverage=1 00:04:11.588 --rc genhtml_branch_coverage=1 00:04:11.588 --rc genhtml_function_coverage=1 00:04:11.588 --rc genhtml_legend=1 00:04:11.588 --rc geninfo_all_blocks=1 00:04:11.588 --no-external' 00:04:11.588 21:00:34 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:11.588 --rc lcov_branch_coverage=1 00:04:11.588 --rc lcov_function_coverage=1 00:04:11.588 --rc genhtml_branch_coverage=1 00:04:11.588 --rc genhtml_function_coverage=1 00:04:11.588 --rc genhtml_legend=1 00:04:11.588 --rc geninfo_all_blocks=1 00:04:11.588 --no-external' 00:04:11.588 21:00:34 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:11.847 lcov: LCOV version 1.15 00:04:11.847 21:00:34 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:13.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:13.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:13.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:13.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:13.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:13.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:13.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:13.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:13.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:13.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:13.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:13.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:13.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:13.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:13.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:13.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:13.482 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:13.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:13.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:13.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:13.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:13.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:13.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:13.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:13.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:13.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:13.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:13.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:13.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:13.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:13.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:13.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:13.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:13.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:13.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:13.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:13.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:13.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:13.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:13.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:13.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:13.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:13.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:13.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:13.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:13.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:13.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:13.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:13.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:13.482 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:13.482
geninfo: WARNING: GCOV did not produce any data for the header-only objects under /home/vagrant/spdk_repo/spdk/test/cpp_headers/ -- each of the following .gcno files reported "no functions found": bit_array, memory, nbd, crc32, blob_bdev, vhost, histogram_data, bdev_zone, scheduler, bdev, scsi_spec, nvme_zns, stdinc, nvme_ocssd_spec, ftl, config, gpt_spec, rpc, trace, pipe, opal_spec, env, file, ioat_spec, endian, vmd, blobfs, nvme, blob, accel, nvmf_cmd, opal, nvme_intel, string, scsi, mmio, idxd, nvmf_transport, vfio_user_spec, queue, dif, lvol, crc64, base64, version, zipf, bdev_module, env_dpdk, init, jsonrpc, fd_group, event, iscsi_spec, util, idxd_spec, reduce, notify, accel_module, conf, xor, tree (00:04:13.482 - 00:04:13.746)
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno, ftl_p2l_upgrade.gcno and ftl_band_upgrade.gcno ("no functions found") 00:05:00.416
21:01:22 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:05:00.416
21:01:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:00.416
21:01:22 -- common/autotest_common.sh@10 -- # set +x 00:05:00.416
21:01:22 -- spdk/autotest.sh@102 -- # rm -f 00:05:00.416
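The geninfo warnings above are expected: each test/cpp_headers object only verifies that a public header compiles standalone, so its .gcno carries no function records. A minimal sketch of how such objects could be excluded when post-processing coverage (illustrative only -- the filter pattern and output file names here are assumptions, not taken from this job's scripts):

    # capture coverage, then drop the header-compile objects that carry no functions
    lcov --capture --directory /home/vagrant/spdk_repo/spdk --output-file cov.info
    lcov --remove cov.info '*/test/cpp_headers/*' --output-file cov_filtered.info
    genhtml cov_filtered.info --output-directory coverage_html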
spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:00.416 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:00.416 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:00.416 21:01:22 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:05:00.416 21:01:22 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:00.416 21:01:22 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:00.416 21:01:22 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:00.416 21:01:22 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:00.416 21:01:22 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:00.416 21:01:22 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:00.416 21:01:22 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:00.416 21:01:22 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:00.416 21:01:22 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:05:00.416 21:01:22 -- spdk/autotest.sh@121 -- # grep -v p 00:05:00.416 21:01:22 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:05:00.416 21:01:22 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:00.416 21:01:22 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:00.416 21:01:22 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:05:00.416 21:01:22 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:00.416 21:01:22 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:00.416 No valid GPT data, bailing 00:05:00.416 21:01:22 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:00.416 21:01:22 -- scripts/common.sh@393 -- # pt= 00:05:00.416 21:01:22 -- scripts/common.sh@394 -- # return 1 00:05:00.416 21:01:22 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:00.416 1+0 records in 00:05:00.416 1+0 records out 00:05:00.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252874 s, 41.5 MB/s 00:05:00.416 21:01:22 -- spdk/autotest.sh@129 -- # sync 00:05:00.416 21:01:22 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:00.416 21:01:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:00.416 21:01:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:01.351 21:01:24 -- spdk/autotest.sh@135 -- # uname -s 00:05:01.610 21:01:24 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:05:01.610 21:01:24 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:01.610 21:01:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:01.610 21:01:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:01.610 21:01:24 -- common/autotest_common.sh@10 -- # set +x 00:05:01.610 ************************************ 00:05:01.610 START TEST setup.sh 00:05:01.610 ************************************ 00:05:01.610 21:01:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:01.610 * Looking for test storage... 
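The get_zoned_devs/is_block_zoned trace above decides, per NVMe namespace, whether the kernel reports a zoned model before setup.sh rebinds or zeroes anything; a device is treated as zoned only when /sys/block/<dev>/queue/zoned reads something other than "none". A minimal standalone sketch of the same check (assumes a kernel that exposes queue/zoned; not this harness's exact code):

    # list zoned block devices the way the traced check does
    for dev in /sys/block/nvme*n*; do
        [[ -e $dev/queue/zoned ]] || continue
        model=$(<"$dev/queue/zoned")   # "none", "host-aware", or "host-managed"
        if [[ $model != none ]]; then echo "${dev##*/}: zoned ($model)"; fi
    done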
00:05:01.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:01.610 21:01:24 -- setup/test-setup.sh@10 -- # uname -s 00:05:01.610 21:01:24 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:01.610 21:01:24 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:01.610 21:01:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:01.610 21:01:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:01.610 21:01:24 -- common/autotest_common.sh@10 -- # set +x 00:05:01.610 ************************************ 00:05:01.610 START TEST acl 00:05:01.610 ************************************ 00:05:01.610 21:01:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:01.610 * Looking for test storage... 00:05:01.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:01.610 21:01:24 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:01.610 21:01:24 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:01.610 21:01:24 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:01.610 21:01:24 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:01.611 21:01:24 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:01.611 21:01:24 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:01.611 21:01:24 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:01.611 21:01:24 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:01.611 21:01:24 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:01.611 21:01:24 -- setup/acl.sh@12 -- # devs=() 00:05:01.611 21:01:24 -- setup/acl.sh@12 -- # declare -a devs 00:05:01.611 21:01:24 -- setup/acl.sh@13 -- # drivers=() 00:05:01.611 21:01:24 -- setup/acl.sh@13 -- # declare -A drivers 00:05:01.611 21:01:24 -- setup/acl.sh@51 -- # setup reset 00:05:01.611 21:01:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:01.611 21:01:24 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:02.178 21:01:24 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:02.178 21:01:24 -- setup/acl.sh@16 -- # local dev driver 00:05:02.178 21:01:24 -- setup/acl.sh@15 -- # setup output status 00:05:02.178 21:01:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:02.178 21:01:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.178 21:01:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:02.178 Hugepages 00:05:02.178 node hugesize free / total 00:05:02.178 21:01:24 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:02.178 21:01:24 -- setup/acl.sh@19 -- # continue 00:05:02.178 21:01:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:02.178 00:05:02.178 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:02.178 21:01:24 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:02.178 21:01:24 -- setup/acl.sh@19 -- # continue 00:05:02.178 21:01:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:02.437 21:01:24 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:02.437 21:01:24 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:02.437 21:01:24 -- setup/acl.sh@20 -- # continue 00:05:02.437 21:01:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:02.437 21:01:24 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:02.437 21:01:24 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:02.437 21:01:24 -- setup/acl.sh@21 -- # 
[[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:02.437 21:01:24 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:02.437 21:01:24 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:02.437 21:01:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:02.437 21:01:24 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:02.437 21:01:24 -- setup/acl.sh@54 -- # run_test denied denied 00:05:02.437 21:01:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:02.437 21:01:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:02.437 21:01:24 -- common/autotest_common.sh@10 -- # set +x 00:05:02.437 ************************************ 00:05:02.437 START TEST denied 00:05:02.437 ************************************ 00:05:02.437 21:01:24 -- common/autotest_common.sh@1104 -- # denied 00:05:02.437 21:01:24 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:02.437 21:01:24 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:02.437 21:01:24 -- setup/acl.sh@38 -- # setup output config 00:05:02.437 21:01:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.437 21:01:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:04.354 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:04.354 21:01:26 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:04.354 21:01:26 -- setup/acl.sh@28 -- # local dev driver 00:05:04.354 21:01:26 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:04.354 21:01:26 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:04.354 21:01:26 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:04.354 21:01:26 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:04.354 21:01:26 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:04.354 21:01:26 -- setup/acl.sh@41 -- # setup reset 00:05:04.354 21:01:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:04.354 21:01:26 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.612 ************************************ 00:05:04.612 END TEST denied 00:05:04.612 ************************************ 00:05:04.612 00:05:04.612 real 0m2.255s 00:05:04.612 user 0m0.515s 00:05:04.612 sys 0m1.786s 00:05:04.612 21:01:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.612 21:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:04.612 21:01:27 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:04.612 21:01:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:04.612 21:01:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:04.612 21:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:04.613 ************************************ 00:05:04.613 START TEST allowed 00:05:04.613 ************************************ 00:05:04.613 21:01:27 -- common/autotest_common.sh@1104 -- # allowed 00:05:04.613 21:01:27 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:04.613 21:01:27 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:04.613 21:01:27 -- setup/acl.sh@45 -- # setup output config 00:05:04.613 21:01:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.613 21:01:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:06.516 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:06.517 21:01:28 -- setup/acl.sh@47 -- # verify 00:05:06.517 21:01:28 -- setup/acl.sh@28 -- # local dev driver 00:05:06.517 21:01:28 -- setup/acl.sh@48 -- # setup reset 00:05:06.517 21:01:28 -- 
setup/common.sh@9 -- # [[ reset == output ]] 00:05:06.517 21:01:28 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:06.776 ************************************ 00:05:06.776 END TEST allowed 00:05:06.776 ************************************ 00:05:06.776 00:05:06.776 real 0m1.981s 00:05:06.776 user 0m0.475s 00:05:06.776 sys 0m1.469s 00:05:06.776 21:01:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.776 21:01:29 -- common/autotest_common.sh@10 -- # set +x 00:05:06.776 ************************************ 00:05:06.776 END TEST acl 00:05:06.776 ************************************ 00:05:06.776 00:05:06.776 real 0m5.163s 00:05:06.776 user 0m1.554s 00:05:06.776 sys 0m3.658s 00:05:06.776 21:01:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.776 21:01:29 -- common/autotest_common.sh@10 -- # set +x 00:05:06.776 21:01:29 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:06.776 21:01:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.776 21:01:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.776 21:01:29 -- common/autotest_common.sh@10 -- # set +x 00:05:06.776 ************************************ 00:05:06.776 START TEST hugepages 00:05:06.776 ************************************ 00:05:06.776 21:01:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:06.776 * Looking for test storage... 00:05:06.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:06.776 21:01:29 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:06.776 21:01:29 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:06.776 21:01:29 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:06.776 21:01:29 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:06.776 21:01:29 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:06.776 21:01:29 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:06.776 21:01:29 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:06.776 21:01:29 -- setup/common.sh@18 -- # local node= 00:05:06.776 21:01:29 -- setup/common.sh@19 -- # local var val 00:05:06.776 21:01:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.776 21:01:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.776 21:01:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.776 21:01:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.776 21:01:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.776 21:01:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.776 21:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.776 21:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.776 21:01:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 1038104 kB' 'MemAvailable: 7408740 kB' 'Buffers: 42336 kB' 'Cached: 6379168 kB' 'SwapCached: 0 kB' 'Active: 2115684 kB' 'Inactive: 4429712 kB' 'Active(anon): 133040 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982644 kB' 'Inactive(file): 4427920 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 584 kB' 'Writeback: 0 kB' 'AnonPages: 142008 kB' 'Mapped: 73052 kB' 'Shmem: 2616 kB' 'KReclaimable: 281936 kB' 'Slab: 376664 kB' 'SReclaimable: 281936 kB' 'SUnreclaim: 94728 kB' 'KernelStack: 4592 kB' 'PageTables: 3596 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4028400 kB' 'Committed_AS: 618700 
kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14340 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB' 00:05:06.776
[xtrace elided: setup/common.sh@31-32 read each /proc/meminfo key in turn and hit `continue` for every key that was not Hugepagesize, 00:05:06.776 - 00:05:07.036]
21:01:29 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:07.036
21:01:29 -- setup/common.sh@33 -- # echo 2048 00:05:07.036
21:01:29 -- setup/common.sh@33 -- # return 0 00:05:07.036
21:01:29 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:07.036
21:01:29 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
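The get_meminfo trace above walks /proc/meminfo with IFS=': ' and read -r, skipping every key until Hugepagesize matches and its value (2048) is echoed. A minimal equivalent of that loop (sketch only; variable names are illustrative):

    # same parse as the traced loop: split each meminfo line on ': ' and match the key
    while IFS=': ' read -r var val _; do
        [[ $var == Hugepagesize ]] && { echo "$val"; break; }   # prints 2048; the trailing "kB" lands in the throwaway field
    done < /proc/meminfo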
00:05:07.036 21:01:29 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:07.036 21:01:29 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:07.036 21:01:29 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:07.036 21:01:29 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:07.037 21:01:29 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:07.037 21:01:29 -- setup/hugepages.sh@207 -- # get_nodes 00:05:07.037 21:01:29 -- setup/hugepages.sh@27 -- # local node 00:05:07.037 21:01:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.037 21:01:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:07.037 21:01:29 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:07.037 21:01:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:07.037 21:01:29 -- setup/hugepages.sh@208 -- # clear_hp 00:05:07.037 21:01:29 -- setup/hugepages.sh@37 -- # local node hp 00:05:07.037 21:01:29 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:07.037 21:01:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:07.037 21:01:29 -- setup/hugepages.sh@41 -- # echo 0 00:05:07.037 21:01:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:07.037 21:01:29 -- setup/hugepages.sh@41 -- # echo 0 00:05:07.037 21:01:29 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:07.037 21:01:29 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:07.037 21:01:29 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:07.037 21:01:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:07.037 21:01:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.037 21:01:29 -- common/autotest_common.sh@10 -- # set +x 00:05:07.037 ************************************ 00:05:07.037 START TEST default_setup 00:05:07.037 ************************************ 00:05:07.037 21:01:29 -- common/autotest_common.sh@1104 -- # default_setup 00:05:07.037 21:01:29 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:07.037 21:01:29 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:07.037 21:01:29 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:07.037 21:01:29 -- setup/hugepages.sh@51 -- # shift 00:05:07.037 21:01:29 -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:05:07.037 21:01:29 -- setup/hugepages.sh@52 -- # local node_ids 00:05:07.037 21:01:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:07.037 21:01:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:07.037 21:01:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:07.037 21:01:29 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:05:07.037 21:01:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.037 21:01:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:07.037 21:01:29 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:07.037 21:01:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.037 21:01:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.037 21:01:29 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:07.037 21:01:29 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:07.037 21:01:29 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:07.037 21:01:29 -- setup/hugepages.sh@73 -- # return 0 00:05:07.037 21:01:29 -- setup/hugepages.sh@137 -- # setup output 00:05:07.037 21:01:29 -- setup/common.sh@9 -- # [[ output == 
output ]] 00:05:07.037 21:01:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:07.295 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:07.295 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.235 21:01:30 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:08.235 21:01:30 -- setup/hugepages.sh@89 -- # local node 00:05:08.235 21:01:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:08.235 21:01:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:08.235 21:01:30 -- setup/hugepages.sh@92 -- # local surp 00:05:08.235 21:01:30 -- setup/hugepages.sh@93 -- # local resv 00:05:08.235 21:01:30 -- setup/hugepages.sh@94 -- # local anon 00:05:08.235 21:01:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:08.235 21:01:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:08.235 21:01:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:08.235 21:01:30 -- setup/common.sh@18 -- # local node= 00:05:08.235 21:01:30 -- setup/common.sh@19 -- # local var val 00:05:08.235 21:01:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.235 21:01:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.235 21:01:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.235 21:01:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.235 21:01:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.235 21:01:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.235 21:01:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.235 21:01:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.235 21:01:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3132536 kB' 'MemAvailable: 9503148 kB' 'Buffers: 42336 kB' 'Cached: 6379092 kB' 'SwapCached: 0 kB' 'Active: 2121648 kB' 'Inactive: 4429692 kB' 'Active(anon): 138940 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1982708 kB' 'Inactive(file): 4427904 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 148116 kB' 'Mapped: 73004 kB' 'Shmem: 2616 kB' 'KReclaimable: 281864 kB' 'Slab: 376128 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94264 kB' 'KernelStack: 4512 kB' 'PageTables: 3548 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 635132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB' 00:05:08.235 21:01:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.235 21:01:30 -- setup/common.sh@32 -- # continue 00:05:08.235 21:01:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.235 21:01:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.235 21:01:30 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.235 21:01:30 -- setup/common.sh@32 -- # continue 00:05:08.235 21:01:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.235 21:01:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.235 21:01:30 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.235 21:01:30 -- setup/common.sh@32 -- # continue 00:05:08.235
[xtrace elided: setup/common.sh@31-32 read each remaining /proc/meminfo key and hit `continue` for every key that was not AnonHugePages, 00:05:08.235 - 00:05:08.236]
21:01:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.236
21:01:30 -- setup/common.sh@33 -- # echo 0 00:05:08.236
21:01:30 -- setup/common.sh@33 -- # return 0 00:05:08.236
21:01:30 -- setup/hugepages.sh@97 -- # anon=0 00:05:08.236
21:01:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:08.236
21:01:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.236
21:01:30 -- setup/common.sh@18 -- # local node= 00:05:08.236
21:01:30 -- setup/common.sh@19 -- # local var val 00:05:08.236
21:01:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.236
21:01:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.236
21:01:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.236
21:01:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.236
21:01:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.236
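get_meminfo is invoked here a third time, now for HugePages_Surp (after AnonHugePages returned 0 above). The same counters are also exported per pool size under sysfs; a short sketch reading them directly (illustrative; assumes the default 2048kB pool seen in this run):

    # per-pool hugepage counters, equivalent to the HugePages_* keys in /proc/meminfo
    base=/sys/kernel/mm/hugepages/hugepages-2048kB
    for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages; do
        printf '%s=%s\n' "$f" "$(<"$base/$f")"
    done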
mem=("${mem[@]#Node +([0-9]) }") 00:05:08.236 21:01:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.236 21:01:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.236 21:01:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3132796 kB' 'MemAvailable: 9503408 kB' 'Buffers: 42336 kB' 'Cached: 6379092 kB' 'SwapCached: 0 kB' 'Active: 2121440 kB' 'Inactive: 4429696 kB' 'Active(anon): 138732 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982708 kB' 'Inactive(file): 4427904 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 148144 kB' 'Mapped: 73004 kB' 'Shmem: 2616 kB' 'KReclaimable: 281864 kB' 'Slab: 376128 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94264 kB' 'KernelStack: 4464 kB' 'PageTables: 3476 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 635132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB' 00:05:08.236 21:01:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.236 21:01:30 -- setup/common.sh@32 -- # continue 00:05:08.236 21:01:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.236 21:01:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.236 21:01:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.236 21:01:30 -- setup/common.sh@32 -- # continue 00:05:08.236 21:01:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.236 21:01:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.236 21:01:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.236 21:01:30 -- setup/common.sh@32 -- # continue 00:05:08.236 21:01:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.236 21:01:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.236 21:01:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.236 21:01:30 -- setup/common.sh@32 -- # continue 00:05:08.236 21:01:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.236 21:01:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.236 21:01:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.236 21:01:30 -- setup/common.sh@32 -- # continue 00:05:08.236 21:01:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.236 21:01:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.236 21:01:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.236 21:01:30 -- setup/common.sh@32 -- # continue 00:05:08.236 21:01:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.236 21:01:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.237 21:01:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.237 21:01:30 -- setup/common.sh@32 -- # continue 00:05:08.237 21:01:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.237 21:01:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.237 21:01:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.237 21:01:30 -- setup/common.sh@32 -- # continue 00:05:08.237 21:01:30 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:08.237
[xtrace elided: setup/common.sh@31-32 read each /proc/meminfo key in turn, each non-matching key hitting `continue`, 00:05:08.237]
21:01:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.237
21:01:30 -- setup/common.sh@32 -- # continue
00:05:08.237 21:01:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.237 21:01:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.237 21:01:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.237 21:01:30 -- setup/common.sh@32 -- # continue 00:05:08.237 21:01:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.238 21:01:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.238 21:01:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.238 21:01:30 -- setup/common.sh@33 -- # echo 0 00:05:08.238 21:01:30 -- setup/common.sh@33 -- # return 0 00:05:08.238 21:01:30 -- setup/hugepages.sh@99 -- # surp=0 00:05:08.238 21:01:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:08.238 21:01:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:08.238 21:01:30 -- setup/common.sh@18 -- # local node= 00:05:08.238 21:01:30 -- setup/common.sh@19 -- # local var val 00:05:08.238 21:01:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.238 21:01:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.238 21:01:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.238 21:01:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.238 21:01:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.238 21:01:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.238 21:01:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.238 21:01:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.238 21:01:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3132772 kB' 'MemAvailable: 9503384 kB' 'Buffers: 42336 kB' 'Cached: 6379092 kB' 'SwapCached: 0 kB' 'Active: 2121440 kB' 'Inactive: 4429696 kB' 'Active(anon): 138732 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982708 kB' 'Inactive(file): 4427904 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 148016 kB' 'Mapped: 73004 kB' 'Shmem: 2616 kB' 'KReclaimable: 281864 kB' 'Slab: 376128 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94264 kB' 'KernelStack: 4464 kB' 'PageTables: 3476 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 640040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB' 00:05:08.238 21:01:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.238 21:01:30 -- setup/common.sh@32 -- # continue 00:05:08.238 21:01:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.238 21:01:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.238 21:01:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.238 21:01:30 -- setup/common.sh@32 -- # continue 00:05:08.238 21:01:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.238 21:01:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.238 21:01:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.238 21:01:30 -- setup/common.sh@32 -- # continue 00:05:08.238 21:01:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.238 
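The long run of IFS/read/compare/continue entries above is setup/common.sh's get_meminfo scanning /proc/meminfo one field at a time until it reaches the requested key. A condensed sketch of the helper as reconstructed from this xtrace (the redirection feeding mapfile is not visible in the trace, so reading from $mem_f is an assumption, and the scan is written as a plain for loop rather than the script's exact read pipeline):

shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem line
    mem_f=/proc/meminfo
    # With a node id, read that node's sysfs meminfo instead of the global file.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <id> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    # This loop is the long compare-and-continue run visible in the trace.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"    # value only, without the trailing kB unit
        return 0
    done
    return 1
}

Called as get_meminfo HugePages_Surp it prints the global value (0 here); called as get_meminfo HugePages_Surp 0 it reads /sys/devices/system/node/node0/meminfo, which is how the per-node check later in this test gets its numbers.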
00:05:08.238 21:01:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:08.238 21:01:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:08.238 21:01:30 -- setup/common.sh@18 -- # local node=
00:05:08.238 21:01:30 -- setup/common.sh@19 -- # local var val
00:05:08.238 21:01:30 -- setup/common.sh@20 -- # local mem_f mem
00:05:08.238 21:01:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:08.238 21:01:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:08.238 21:01:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:08.238 21:01:30 -- setup/common.sh@28 -- # mapfile -t mem
00:05:08.238 21:01:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:08.238 21:01:30 -- setup/common.sh@31 -- # IFS=': '
00:05:08.238 21:01:30 -- setup/common.sh@31 -- # read -r var val _
00:05:08.238 21:01:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3132772 kB' 'MemAvailable: 9503384 kB' 'Buffers: 42336 kB' 'Cached: 6379092 kB' 'SwapCached: 0 kB' 'Active: 2121440 kB' 'Inactive: 4429696 kB' 'Active(anon): 138732 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982708 kB' 'Inactive(file): 4427904 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 148016 kB' 'Mapped: 73004 kB' 'Shmem: 2616 kB' 'KReclaimable: 281864 kB' 'Slab: 376128 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94264 kB' 'KernelStack: 4464 kB' 'PageTables: 3476 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 640040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB'
00:05:08.238 21:01:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:08.238 21:01:30 -- setup/common.sh@32 -- # continue
[xtrace condensed: the setup/common.sh@31 IFS/read and @32 compare/continue pair repeats for each remaining /proc/meminfo field until HugePages_Rsvd matches]
00:05:08.239 21:01:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:08.239 21:01:30 -- setup/common.sh@33 -- # echo 0
00:05:08.239 21:01:30 -- setup/common.sh@33 -- # return 0
nr_hugepages=1024
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
00:05:08.239 21:01:30 -- setup/hugepages.sh@100 -- # resv=0
00:05:08.239 21:01:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:08.239 21:01:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:08.239 21:01:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:08.239 21:01:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:08.239 21:01:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:08.239 21:01:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
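The @107/@109 arithmetic checks just traced encode the test's hugepage accounting invariant: the kernel-reported total must equal the requested page count plus any surplus and reserved pages, and here both extras are expected to be zero. As a standalone sketch (reusing the get_meminfo sketch above; the variable names mirror the trace):

nr_hugepages=1024                      # the count requested via vm.nr_hugepages
surp=$(get_meminfo HugePages_Surp)     # surplus pages allocated beyond the pool
resv=$(get_meminfo HugePages_Rsvd)     # pages reserved but not yet faulted in
total=$(get_meminfo HugePages_Total)
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
(( total == nr_hugepages )) || echo "unexpected surplus/reserved pages" >&2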
00:05:08.239 21:01:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:08.239 21:01:30 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:08.239 21:01:30 -- setup/common.sh@18 -- # local node=
00:05:08.239 21:01:30 -- setup/common.sh@19 -- # local var val
00:05:08.239 21:01:30 -- setup/common.sh@20 -- # local mem_f mem
00:05:08.239 21:01:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:08.239 21:01:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:08.239 21:01:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:08.239 21:01:30 -- setup/common.sh@28 -- # mapfile -t mem
00:05:08.239 21:01:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:08.239 21:01:30 -- setup/common.sh@31 -- # IFS=': '
00:05:08.239 21:01:30 -- setup/common.sh@31 -- # read -r var val _
00:05:08.239 21:01:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3133032 kB' 'MemAvailable: 9503644 kB' 'Buffers: 42336 kB' 'Cached: 6379092 kB' 'SwapCached: 0 kB' 'Active: 2121700 kB' 'Inactive: 4429696 kB' 'Active(anon): 138992 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982708 kB' 'Inactive(file): 4427904 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 148276 kB' 'Mapped: 73004 kB' 'Shmem: 2616 kB' 'KReclaimable: 281864 kB' 'Slab: 376128 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94264 kB' 'KernelStack: 4532 kB' 'PageTables: 3476 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 640604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB'
00:05:08.239 21:01:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:08.239 21:01:30 -- setup/common.sh@32 -- # continue
[xtrace condensed: the setup/common.sh@31 IFS/read and @32 compare/continue pair repeats for each remaining /proc/meminfo field until HugePages_Total matches]
00:05:08.240 21:01:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:08.240 21:01:30 -- setup/common.sh@33 -- # echo 1024
00:05:08.240 21:01:30 -- setup/common.sh@33 -- # return 0
00:05:08.240 21:01:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:08.240 21:01:30 -- setup/hugepages.sh@112 -- # get_nodes
00:05:08.240 21:01:30 -- setup/hugepages.sh@27 -- # local node
00:05:08.240 21:01:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:08.240 21:01:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:08.240 21:01:30 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:08.240 21:01:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
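get_nodes, traced above, discovers the NUMA topology by globbing sysfs and records one hugepage count per node (a single node here, hence no_nodes=1). A sketch of that walk; the trace only shows the literal 1024 being stored for node 0, so fetching the value through get_meminfo is an assumption:

shopt -s extglob
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}                                   # e.g. node0 -> 0
    nodes_sys[$id]=$(get_meminfo HugePages_Total "$id") # per-node hugepage total
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || echo "no NUMA nodes found" >&2    # the trace shows no_nodes=1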
00:05:08.240 21:01:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:08.240 21:01:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:08.240 21:01:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:08.240 21:01:30 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:08.240 21:01:30 -- setup/common.sh@18 -- # local node=0
00:05:08.240 21:01:30 -- setup/common.sh@19 -- # local var val
00:05:08.240 21:01:30 -- setup/common.sh@20 -- # local mem_f mem
00:05:08.241 21:01:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:08.241 21:01:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:08.241 21:01:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:08.241 21:01:30 -- setup/common.sh@28 -- # mapfile -t mem
00:05:08.241 21:01:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:08.241 21:01:30 -- setup/common.sh@31 -- # IFS=': '
00:05:08.241 21:01:30 -- setup/common.sh@31 -- # read -r var val _
00:05:08.241 21:01:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3132756 kB' 'MemUsed: 9118348 kB' 'Active: 2121960 kB' 'Inactive: 4429696 kB' 'Active(anon): 139252 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982708 kB' 'Inactive(file): 4427904 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'FilePages: 6421428 kB' 'Mapped: 73004 kB' 'AnonPages: 148436 kB' 'Shmem: 2616 kB' 'KernelStack: 4560 kB' 'PageTables: 3620 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281864 kB' 'Slab: 376128 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94264 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:08.241 21:01:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.241 21:01:30 -- setup/common.sh@32 -- # continue
[xtrace condensed: the setup/common.sh@31 IFS/read and @32 compare/continue pair repeats for each remaining node0 meminfo field until HugePages_Surp matches]
00:05:08.241 21:01:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.241 21:01:30 -- setup/common.sh@33 -- # echo 0
00:05:08.241 21:01:30 -- setup/common.sh@33 -- # return 0
00:05:08.241 21:01:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:08.241 21:01:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:08.241 21:01:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:08.241 21:01:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:08.241 21:01:30 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
21:01:30 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:08.241 ************************************
00:05:08.241 END TEST default_setup
00:05:08.241 ************************************
00:05:08.241
00:05:08.241 real 0m1.358s
00:05:08.241 user 0m0.312s
00:05:08.241 sys 0m1.000s
21:01:30 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:08.242 21:01:30 -- common/autotest_common.sh@10 -- # set +x
00:05:08.242 21:01:30 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:08.242 21:01:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:08.242 21:01:30 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:08.242 21:01:30 -- common/autotest_common.sh@10 -- # set +x
00:05:08.242 ************************************
00:05:08.242 START TEST per_node_1G_alloc
00:05:08.242 ************************************
00:05:08.242 21:01:30 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:05:08.242 21:01:30 -- setup/hugepages.sh@143 -- # local IFS=,
00:05:08.242 21:01:30 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:08.242 21:01:30 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:08.242 21:01:30 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:08.242 21:01:30 -- setup/hugepages.sh@51 -- # shift
00:05:08.242 21:01:30 -- setup/hugepages.sh@52 -- # node_ids=("$@")
00:05:08.242 21:01:30 -- setup/hugepages.sh@52 -- # local node_ids
00:05:08.242 21:01:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:08.242 21:01:30 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:08.242 21:01:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:08.242 21:01:30 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:05:08.242 21:01:30 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:08.242 21:01:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:08.242 21:01:30 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:08.242 21:01:30 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:08.242 21:01:30 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:08.242 21:01:30 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:08.242 21:01:30 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:08.242 21:01:30 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:08.242 21:01:30 -- setup/hugepages.sh@73 -- # return 0
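The per_node_1G_alloc trace above turns a 1 GiB request into a page count: get_test_nr_hugepages is handed the size in kB (1048576) plus a node list (0) and divides by the default hugepage size. The arithmetic, as a sketch:

size=1048576            # requested allocation in kB (1 GiB), per the trace
default_hugepages=2048  # kB, matching 'Hugepagesize: 2048 kB' in the snapshots
(( size >= default_hugepages )) || exit 1       # the @55 guard in the trace
nr_hugepages=$(( size / default_hugepages ))    # 1048576 / 2048 = 512
echo "nr_hugepages=$nr_hugepages"

The resulting 512 pages are pinned to node 0 (nodes_test[0]=512), which is exactly what the NRHUGE=512 HUGENODE=0 environment hands to setup.sh on the next lines.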
00:05:08.242 21:01:30 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:08.242 21:01:30 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:08.242 21:01:30 -- setup/hugepages.sh@146 -- # setup output
00:05:08.242 21:01:30 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:08.242 21:01:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:08.501 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:08.759 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:09.025 21:01:31 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:09.025 21:01:31 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:09.025 21:01:31 -- setup/hugepages.sh@89 -- # local node
00:05:09.025 21:01:31 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:09.025 21:01:31 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:09.025 21:01:31 -- setup/hugepages.sh@92 -- # local surp
00:05:09.025 21:01:31 -- setup/hugepages.sh@93 -- # local resv
00:05:09.025 21:01:31 -- setup/hugepages.sh@94 -- # local anon
00:05:09.025 21:01:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
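The @96 test just traced, [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], is a transparent-hugepage guard: the bracketed word in the THP policy string marks the active mode, so AnonHugePages is only worth sampling when the policy is not pinned to never. A sketch of the same check (the policy string is presumably read from the standard sysfs knob):

thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # the trace reads 0 kB here
else
    anon=0                              # THP disabled, nothing to measure
fi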
00:05:09.025 21:01:31 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.025 21:01:31 -- setup/common.sh@32 -- # continue
[trace condensed: common.sh@31-32 re-reads each remaining snapshot line in turn (MemFree, MemAvailable, Buffers, ... HardwareCorrupted); every key fails the AnonHugePages match and hits 'continue']
00:05:09.025 21:01:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.026 21:01:31 -- setup/common.sh@33 -- # echo 0
00:05:09.026 21:01:31 -- setup/common.sh@33 -- # return 0
00:05:09.026 21:01:31 -- setup/hugepages.sh@97 -- # anon=0
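[annotation] The cycle just traced is one complete get_meminfo call: read the whole meminfo file into an array, strip any per-node prefix, then scan key by key until the requested counter is found and echo its value. A minimal standalone re-creation of that scan, reconstructed from the xtrace (the helper name get_meminfo_sketch and the failure return are illustrative, not the literal setup/common.sh source):

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern used below

    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local -a mem
        local var val _ line
        # With a node argument, read the per-node copy from sysfs instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # not the key we asked for: keep scanning
            echo "$val"                       # e.g. 0 for AnonHugePages in the trace
            return 0
        done
        return 1
    }

    get_meminfo_sketch AnonHugePages      # prints 0 on the box traced above
    get_meminfo_sketch HugePages_Total    # prints 512

Splitting with IFS=': ' drops both the colon and the unit, so "AnonHugePages: 0 kB" yields var=AnonHugePages, val=0, with "kB" discarded into _, which is why the trace echoes bare numbers.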
00:05:09.026 21:01:31 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:09.026 21:01:31 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:09.026 21:01:31 -- setup/common.sh@18 -- # local node=
00:05:09.026 21:01:31 -- setup/common.sh@19 -- # local var val
00:05:09.026 21:01:31 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.026 21:01:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.026 21:01:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.026 21:01:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.026 21:01:31 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.026 21:01:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.026 21:01:31 -- setup/common.sh@31 -- # IFS=': '
00:05:09.026 21:01:31 -- setup/common.sh@31 -- # read -r var val _
00:05:09.027 21:01:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4180192 kB' 'MemAvailable: 10550808 kB' 'Buffers: 42336 kB' 'Cached: 6379096 kB' 'SwapCached: 0 kB' 'Active: 2121756 kB' 'Inactive: 4429700 kB' 'Active(anon): 139048 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982708 kB' 'Inactive(file): 4427908 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 148404 kB' 'Mapped: 73004 kB' 'Shmem: 2616 kB' 'KReclaimable: 281864 kB' 'Slab: 376204 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94340 kB' 'KernelStack: 4568 kB' 'PageTables: 3420 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 642340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14356 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB'
[trace condensed: common.sh@31-32 walks the snapshot key by key (MemTotal, MemFree, ... HugePages_Rsvd); every key fails the HugePages_Surp match and hits 'continue']
00:05:09.028 21:01:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.028 21:01:31 -- setup/common.sh@33 -- # echo 0
00:05:09.028 21:01:31 -- setup/common.sh@33 -- # return 0
00:05:09.028 21:01:31 -- setup/hugepages.sh@99 -- # surp=0
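[annotation] The backslashed right-hand sides in these comparisons (e.g. \H\u\g\e\P\a\g\e\s\_\S\u\r\p) are an xtrace artifact, not script text. In [[ $var == "$get" ]] the quoted right-hand side is a literal pattern, and bash's trace output backslash-escapes every character of the expanded value so the printed form still reads as a literal match rather than a glob. A minimal reproduction (illustrative; the CI log's custom PS4 evidently replaces the default '+ ' prefix with the script@line marker):

    set -x
    var=MemTotal get=HugePages_Surp
    [[ $var == "$get" ]]
    # the trace prints: + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]

The same escaping explains the hugepages.sh@96 entry earlier: the test against the transparent_hugepage 'enabled' setting uses the pattern *"[never]"*, which xtrace renders as *\[\n\e\v\e\r\]*.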
00:05:09.028 21:01:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:09.028 21:01:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:09.028 21:01:31 -- setup/common.sh@18 -- # local node=
00:05:09.028 21:01:31 -- setup/common.sh@19 -- # local var val
00:05:09.028 21:01:31 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.028 21:01:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.028 21:01:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.028 21:01:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.028 21:01:31 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.028 21:01:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.028 21:01:31 -- setup/common.sh@31 -- # IFS=': '
00:05:09.028 21:01:31 -- setup/common.sh@31 -- # read -r var val _
00:05:09.028 21:01:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4180452 kB' 'MemAvailable: 10551068 kB' 'Buffers: 42336 kB' 'Cached: 6379096 kB' 'SwapCached: 0 kB' 'Active: 2121756 kB' 'Inactive: 4429700 kB' 'Active(anon): 139048 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982708 kB' 'Inactive(file): 4427908 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 148276 kB' 'Mapped: 73004 kB' 'Shmem: 2616 kB' 'KReclaimable: 281864 kB' 'Slab: 376204 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94340 kB' 'KernelStack: 4568 kB' 'PageTables: 3420 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 636992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14356 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB'
[trace condensed: common.sh@31-32 again walks the snapshot key by key; everything from MemTotal through HugePages_Free fails the HugePages_Rsvd match and hits 'continue']
00:05:09.029 21:01:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.029 21:01:31 -- setup/common.sh@33 -- # echo 0
00:05:09.029 21:01:31 -- setup/common.sh@33 -- # return 0
00:05:09.029 21:01:31 -- setup/hugepages.sh@100 -- # resv=0
00:05:09.029 21:01:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:09.029 nr_hugepages=512
00:05:09.029 21:01:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:09.029 resv_hugepages=0
00:05:09.029 21:01:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:09.029 surplus_hugepages=0
00:05:09.029 21:01:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:09.029 anon_hugepages=0
00:05:09.029 21:01:31 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:09.029 21:01:31 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:09.029 21:01:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:09.029 21:01:31 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:09.029 21:01:31 -- setup/common.sh@18 -- # local node=
00:05:09.029 21:01:31 -- setup/common.sh@19 -- # local var val
00:05:09.029 21:01:31 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.029 21:01:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.029 21:01:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.029 21:01:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.029 21:01:31 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.029 21:01:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.029 21:01:31 -- setup/common.sh@31 -- # IFS=': '
00:05:09.029 21:01:31 -- setup/common.sh@31 -- # read -r var val _
00:05:09.029 21:01:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4180664 kB' 'MemAvailable: 10551280 kB' 'Buffers: 42336 kB' 'Cached: 6379096 kB' 'SwapCached: 0 kB' 'Active: 2121992 kB' 'Inactive: 4429700 kB' 'Active(anon): 139284 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982708 kB' 'Inactive(file): 4427908 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 148512 kB' 'Mapped: 73004 kB' 'Shmem: 2616 kB' 'KReclaimable: 281864 kB' 'Slab: 376204 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94340 kB' 'KernelStack: 4620 kB' 'PageTables: 3396 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 629816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14356 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB'
[trace condensed: common.sh@31-32 walks the snapshot key by key until HugePages_Total matches]
00:05:09.031 21:01:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.031 21:01:31 -- setup/common.sh@33 -- # echo 512
00:05:09.031 21:01:31 -- setup/common.sh@33 -- # return 0
00:05:09.031 21:01:31 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:09.031 21:01:31 -- setup/hugepages.sh@112 -- # get_nodes
00:05:09.031 21:01:31 -- setup/hugepages.sh@27 -- # local node
00:05:09.031 21:01:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
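[annotation] At this point verify_nr_hugepages has confirmed, against live /proc/meminfo, that the 512 pages requested with NRHUGE=512 are fully accounted for. The arithmetic it just traced, restated as a standalone check (helper name from the sketch above; illustrative only, not the hugepages.sh source):

    nr_hugepages=512                              # the NRHUGE value passed to setup
    surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in the trace
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in the trace
    total=$(get_meminfo_sketch HugePages_Total)   # 512 in the trace
    # Total pages must equal requested pages plus surplus plus reserved.
    (( total == nr_hugepages + surp + resv )) \
        || echo "hugepage accounting mismatch" >&2

get_nodes then repeats the same accounting per NUMA node, reading the node-local counters from sysfs; on this single-node VM only node0 exists, so the expected per-node total is the full 512 pages.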
00:05:09.031 21:01:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:09.031 21:01:31 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:09.031 21:01:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:09.031 21:01:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:09.031 21:01:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:09.031 21:01:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:09.031 21:01:31 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:09.031 21:01:31 -- setup/common.sh@18 -- # local node=0
00:05:09.031 21:01:31 -- setup/common.sh@19 -- # local var val
00:05:09.031 21:01:31 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.031 21:01:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.031 21:01:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:09.031 21:01:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:09.031 21:01:31 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.031 21:01:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.031 21:01:31 -- setup/common.sh@31 -- # IFS=': '
00:05:09.031 21:01:31 -- setup/common.sh@31 -- # read -r var val _
00:05:09.031 21:01:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4181404 kB' 'MemUsed: 8069700 kB' 'Active: 2121908 kB' 'Inactive: 4429700 kB' 'Active(anon): 139200 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982708 kB' 'Inactive(file): 4427908 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'FilePages: 6421432 kB' 'Mapped: 72944 kB' 'AnonPages: 148416 kB' 'Shmem: 2616 kB' 'KernelStack: 4664 kB' 'PageTables: 3560 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281864 kB' 'Slab: 376316 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[trace condensed: common.sh@31-32 walks the node0 snapshot key by key (MemTotal, MemFree, MemUsed, Active, Inactive, Active(anon), ... FileHugePages), each failing the HugePages_Surp match with 'continue'; the raw trace resumes below]
00:05:09.032 21:01:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.032 21:01:31 --
00:05:09.032 21:01:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.032 21:01:31 -- setup/common.sh@32 -- # continue
00:05:09.032 21:01:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.032 21:01:31 -- setup/common.sh@32 -- # continue
00:05:09.032 21:01:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.032 21:01:31 -- setup/common.sh@33 -- # echo 0
00:05:09.032 21:01:31 -- setup/common.sh@33 -- # return 0
00:05:09.032 21:01:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:09.032 21:01:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:09.032 21:01:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:09.032 21:01:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:09.032 21:01:31 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:09.032 node0=512 expecting 512
00:05:09.032 21:01:31 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:09.032 
00:05:09.032 real	0m0.757s
00:05:09.032 user	0m0.237s
00:05:09.032 sys	0m0.554s
00:05:09.032 21:01:31 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:09.032 21:01:31 -- common/autotest_common.sh@10 -- # set +x
00:05:09.032 ************************************
00:05:09.032 END TEST per_node_1G_alloc
00:05:09.032 ************************************
00:05:09.032 21:01:31 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:09.032 21:01:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:09.032 21:01:31 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:09.032 21:01:31 -- common/autotest_common.sh@10 -- # set +x
00:05:09.350 ************************************
00:05:09.350 START TEST even_2G_alloc
00:05:09.350 ************************************
00:05:09.350 21:01:31 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:05:09.350 21:01:31 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:09.350 21:01:31 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:09.350 21:01:31 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:09.350 21:01:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:09.351 21:01:31 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:09.351 21:01:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:09.351 21:01:31 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:05:09.351 21:01:31 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:09.351 21:01:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:09.351 21:01:31 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:09.351 21:01:31 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:09.351 21:01:31 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:09.351 21:01:31 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:09.351 21:01:31 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:09.351 21:01:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:09.351 21:01:31 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:09.351 21:01:31 -- setup/hugepages.sh@83 -- # : 0
00:05:09.351 21:01:31 -- setup/hugepages.sh@84 -- # : 0
00:05:09.351 21:01:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
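The get_test_nr_hugepages trace above turns a requested allocation size into a page count. A minimal sketch of that conversion, assuming the size argument is in kB and default_hugepages is the 2048 kB Hugepagesize reported in /proc/meminfo (both consistent with nr_hugepages=1024 in the trace):

    # Sketch only: names follow the trace; the real logic lives in setup/hugepages.sh.
    size=2097152                                 # requested size in kB (2 GiB)
    default_hugepages=2048                       # kB per 2 MiB huge page (assumed)
    (( size >= default_hugepages )) || exit 1    # refuse sub-page requests
    nr_hugepages=$(( size / default_hugepages ))
    echo "$nr_hugepages"                         # 1024

With a single NUMA node (_no_nodes=1) and no user-supplied node list, the whole budget lands in one slot, which is why the trace shows nodes_test[_no_nodes - 1]=1024.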
00:05:09.351 21:01:31 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:09.351 21:01:31 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:09.351 21:01:31 -- setup/hugepages.sh@153 -- # setup output
00:05:09.351 21:01:31 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:09.351 21:01:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:09.351 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:09.351 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:09.920 21:01:32 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:09.920 21:01:32 -- setup/hugepages.sh@89 -- # local node
00:05:09.920 21:01:32 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:09.920 21:01:32 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:09.920 21:01:32 -- setup/hugepages.sh@92 -- # local surp
00:05:09.920 21:01:32 -- setup/hugepages.sh@93 -- # local resv
00:05:09.920 21:01:32 -- setup/hugepages.sh@94 -- # local anon
00:05:09.920 21:01:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:09.920 21:01:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:09.920 21:01:32 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:09.920 21:01:32 -- setup/common.sh@18 -- # local node=
00:05:09.920 21:01:32 -- setup/common.sh@19 -- # local var val
00:05:09.920 21:01:32 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.920 21:01:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.920 21:01:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.920 21:01:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.920 21:01:32 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.920 21:01:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.920 21:01:32 -- setup/common.sh@31 -- # IFS=': '
00:05:09.920 21:01:32 -- setup/common.sh@31 -- # read -r var val _
00:05:09.921 21:01:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3132140 kB' 'MemAvailable: 9502756 kB' 'Buffers: 42336 kB' 'Cached: 6379096 kB' 'SwapCached: 0 kB' 'Active: 2121568 kB' 'Inactive: 4429672 kB' 'Active(anon): 138832 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982736 kB' 'Inactive(file): 4427880 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 148124 kB' 'Mapped: 72892 kB' 'Shmem: 2616 kB' 'KReclaimable: 281864 kB' 'Slab: 376152 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94288 kB' 'KernelStack: 4568 kB' 'PageTables: 3436 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 633692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14292 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB'
00:05:09.921 21:01:32 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.921 21:01:32 -- setup/common.sh@32 -- # continue
00:05:09.921 [ ... xtrace trimmed: identical read/compare/continue iterations for MemFree through HardwareCorrupted ... ]
00:05:09.922 21:01:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.922 21:01:32 -- setup/common.sh@33 -- # echo 0
00:05:09.922 21:01:32 -- setup/common.sh@33 -- # return 0
00:05:09.922 21:01:32 -- setup/hugepages.sh@97 -- # anon=0
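The AnonHugePages lookup that just returned anon=0 is the pattern repeated for every counter below: read the whole meminfo file, split each line on ': ', and echo the value of the first matching key. A minimal reconstruction of that helper, assuming /proc/meminfo as the source (the empty node= in the trace shows no per-node sysfs file was selected on this run):

    #!/usr/bin/env bash
    # Reconstruction of the get_meminfo pattern visible in the xtrace;
    # names follow setup/common.sh, but this is a sketch, not the script itself.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _
        # With a node argument, the per-node counters under sysfs are used instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # value only; any trailing "kB" unit lands in $_
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    get_meminfo HugePages_Surp    # prints 0 on this runner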
00:05:09.922 21:01:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:09.922 21:01:32 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:09.922 21:01:32 -- setup/common.sh@18 -- # local node=
00:05:09.922 21:01:32 -- setup/common.sh@19 -- # local var val
00:05:09.922 21:01:32 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.922 21:01:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.922 21:01:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.922 21:01:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.922 21:01:32 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.922 21:01:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.922 21:01:32 -- setup/common.sh@31 -- # IFS=': '
00:05:09.922 21:01:32 -- setup/common.sh@31 -- # read -r var val _
00:05:09.922 21:01:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3132140 kB' 'MemAvailable: 9502756 kB' 'Buffers: 42336 kB' 'Cached: 6379096 kB' 'SwapCached: 0 kB' 'Active: 2121828 kB' 'Inactive: 4429672 kB' 'Active(anon): 139092 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982736 kB' 'Inactive(file): 4427880 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 148384 kB' 'Mapped: 72892 kB' 'Shmem: 2616 kB' 'KReclaimable: 281864 kB' 'Slab: 376152 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94288 kB' 'KernelStack: 4568 kB' 'PageTables: 3436 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 638928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB'
00:05:09.922 [ ... xtrace trimmed: identical read/compare/continue iterations for MemTotal through HugePages_Free ... ]
00:05:09.923 21:01:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.923 21:01:32 -- setup/common.sh@33 -- # echo 0
00:05:09.923 21:01:32 -- setup/common.sh@33 -- # return 0
00:05:09.923 21:01:32 -- setup/hugepages.sh@99 -- # surp=0
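A side note on the odd-looking \H\u\g\e\P\a\g\e\s\_\S\u\r\p in every comparison: inside [[ ]], an unquoted right-hand side of == is treated as a glob pattern, so the script quotes it to force an exact match, and bash's xtrace re-prints the quoted string with each character escaped. A two-line demonstration:

    var=HugePages_Surp
    [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] && echo literal    # exact match only
    [[ $var == HugePages_* ]] && echo glob                        # unquoted: pattern match

Both lines print here, but the first would reject HugePages_Total while the second would accept it, which is exactly why the scan escapes the key.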
00:05:09.923 21:01:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:09.923 21:01:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:09.923 21:01:32 -- setup/common.sh@18 -- # local node=
00:05:09.923 21:01:32 -- setup/common.sh@19 -- # local var val
00:05:09.923 21:01:32 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.923 21:01:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.923 21:01:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.923 21:01:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.923 21:01:32 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.923 21:01:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.923 21:01:32 -- setup/common.sh@31 -- # IFS=': '
00:05:09.923 21:01:32 -- setup/common.sh@31 -- # read -r var val _
00:05:09.923 21:01:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3132140 kB' 'MemAvailable: 9502756 kB' 'Buffers: 42336 kB' 'Cached: 6379096 kB' 'SwapCached: 0 kB' 'Active: 2122088 kB' 'Inactive: 4429672 kB' 'Active(anon): 139352 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982736 kB' 'Inactive(file): 4427880 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 148256 kB' 'Mapped: 72892 kB' 'Shmem: 2616 kB' 'KReclaimable: 281864 kB' 'Slab: 376152 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94288 kB' 'KernelStack: 4568 kB' 'PageTables: 3436 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 633712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB'
00:05:09.923 [ ... xtrace trimmed: identical read/compare/continue iterations for MemTotal through HugePages_Free ... ]
00:05:09.924 21:01:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.924 21:01:32 -- setup/common.sh@33 -- # echo 0
00:05:09.924 21:01:32 -- setup/common.sh@33 -- # return 0
00:05:09.924 21:01:32 -- setup/hugepages.sh@100 -- # resv=0
00:05:09.924 21:01:32 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:09.924 nr_hugepages=1024
00:05:09.924 21:01:32 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:09.924 resv_hugepages=0
00:05:09.924 21:01:32 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:09.924 surplus_hugepages=0
00:05:09.924 21:01:32 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:09.924 anon_hugepages=0
00:05:10.184 21:01:32 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:10.184 21:01:32 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
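The checks just logged encode the core invariant of verify_nr_hugepages: the pages the test configured, plus any surplus and reserved pages the kernel reports, must account for every huge page in the pool. Sketched with the get_meminfo helper reconstructed above (values as observed in this run):

    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)      # 0 here
    resv=$(get_meminfo HugePages_Rsvd)      # 0 here
    total=$(get_meminfo HugePages_Total)    # looked up next in the trace
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2

With surp=resv=0 the invariant reduces to HugePages_Total == 1024, which is exactly the comparison that follows.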
00:05:10.184 21:01:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 21:01:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 21:01:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3132644 kB' 'MemAvailable: 9503260 kB' 'Buffers: 42336 kB' 'Cached: 6379096 kB' 'SwapCached: 0 kB' 'Active: 2121664 kB' 'Inactive: 4429672 kB' 'Active(anon): 138928 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982736 kB' 'Inactive(file): 4427880 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 148064 kB' 'Mapped: 72948 kB' 'Shmem: 2616 kB' 'KReclaimable: 281864 kB' 'Slab: 376152 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94288 kB' 'KernelStack: 4612 kB' 'PageTables: 3580 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 638552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14340 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB' 00:05:10.184 21:01:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.184 21:01:32 -- setup/common.sh@32 -- # continue 00:05:10.184 21:01:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 21:01:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 21:01:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.184 21:01:32 -- setup/common.sh@32 -- # continue 00:05:10.184 21:01:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 21:01:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 21:01:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.184 21:01:32 -- setup/common.sh@32 -- # continue 00:05:10.184 21:01:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 21:01:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 21:01:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.184 21:01:32 -- setup/common.sh@32 -- # continue 00:05:10.184 21:01:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 21:01:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 21:01:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.184 21:01:32 -- setup/common.sh@32 -- # continue 00:05:10.184 21:01:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 21:01:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 21:01:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.184 21:01:32 -- setup/common.sh@32 -- # continue 00:05:10.184 21:01:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.184 21:01:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.184 21:01:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.184 21:01:32 -- setup/common.sh@32 -- # continue 00:05:10.185 21:01:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.185 21:01:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.185 21:01:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.185 21:01:32 -- setup/common.sh@32 -- # continue 00:05:10.185 21:01:32 -- setup/common.sh@31 -- # IFS=': ' 
00:05:10.185 21:01:32 -- setup/common.sh@31 -- # read -r var val _
00:05:10.185 21:01:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.185 21:01:32 -- setup/common.sh@32 -- # continue
[... the @31 read / @32 match / continue cycle repeats for every remaining /proc/meminfo field, from Inactive(anon) through CmaFree, none of them matching HugePages_Total ...]
00:05:10.186 21:01:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.186 21:01:32 -- setup/common.sh@33 -- # echo 1024
00:05:10.186 21:01:32 -- setup/common.sh@33 -- # return 0
00:05:10.186 21:01:32 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:10.186 21:01:32 -- setup/hugepages.sh@112 -- # get_nodes
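The block above is common.sh's get_meminfo walking the file one field at a time under xtrace. Outside the harness, the same lookup pattern takes a few lines of bash; a minimal sketch, where the helper name get_meminfo_field is my own and not anything in the SPDK tree:

    #!/usr/bin/env bash
    # Re-creation of the IFS=': ' / read -r scan traced above.
    # get_meminfo_field is a hypothetical name, not an SPDK helper.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching fields fall through
            echo "$val"                        # e.g. "1024" for HugePages_Total
            return 0
        done < /proc/meminfo
        return 1                               # field not present
    }
    get_meminfo_field HugePages_Total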
00:05:10.186 21:01:32 -- setup/hugepages.sh@27 -- # local node
00:05:10.186 21:01:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:10.186 21:01:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:10.186 21:01:32 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:10.186 21:01:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:10.186 21:01:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:10.186 21:01:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:10.186 21:01:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:10.186 21:01:32 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:10.186 21:01:32 -- setup/common.sh@18 -- # local node=0
00:05:10.186 21:01:32 -- setup/common.sh@19 -- # local var val
00:05:10.186 21:01:32 -- setup/common.sh@20 -- # local mem_f mem
00:05:10.186 21:01:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.186 21:01:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:10.186 21:01:32 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:10.186 21:01:32 -- setup/common.sh@28 -- # mapfile -t mem
00:05:10.186 21:01:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.186 21:01:32 -- setup/common.sh@31 -- # IFS=': '
00:05:10.186 21:01:32 -- setup/common.sh@31 -- # read -r var val _
00:05:10.186 21:01:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3132644 kB' 'MemUsed: 9118460 kB' 'Active: 2121664 kB' 'Inactive: 4429672 kB' 'Active(anon): 138928 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982736 kB' 'Inactive(file): 4427880 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'FilePages: 6421432 kB' 'Mapped: 72948 kB' 'AnonPages: 148844 kB' 'Shmem: 2616 kB' 'KernelStack: 4612 kB' 'PageTables: 3580 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281864 kB' 'Slab: 376152 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
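Per-node queries work the same way, except mem_f is pointed at /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the extglob expansion at common.sh@29 strips off. A standalone sketch of just that strip, assuming the machine actually has a node0 (variable names mirror the trace):

    #!/usr/bin/env bash
    shopt -s extglob                          # the +([0-9]) pattern needs extglob
    mem_f=/sys/devices/system/node/node0/meminfo
    mapfile -t mem < "$mem_f"                 # lines like "Node 0 MemTotal: ..."
    mem=("${mem[@]#Node +([0-9]) }")          # drop the "Node 0 " prefix
    printf '%s\n' "${mem[@]}" | head -n 3     # now plain "MemTotal: ..." lines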
[... the read/match/continue cycle walks the node0 snapshot above field by field, from MemTotal through HugePages_Free, with none of them matching HugePages_Surp ...]
00:05:10.187 21:01:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.187 21:01:32 -- setup/common.sh@33 -- # echo 0
00:05:10.187 21:01:32 -- setup/common.sh@33 -- # return 0
00:05:10.187 21:01:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:10.187 21:01:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:10.187 21:01:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:10.187 21:01:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:10.187 node0=1024 expecting 1024
00:05:10.187 21:01:32 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:10.187 21:01:32 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:10.187
00:05:10.187 real 0m0.916s
00:05:10.187 user 0m0.226s
00:05:10.187 sys 0m0.724s
00:05:10.187 21:01:32 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:10.187 21:01:32 -- common/autotest_common.sh@10 -- # set +x
00:05:10.187 ************************************
00:05:10.187 END TEST even_2G_alloc
00:05:10.187 ************************************
00:05:10.187 21:01:32 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:10.187 21:01:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:10.187 21:01:32 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:10.187 21:01:32 -- common/autotest_common.sh@10 -- # set +x
00:05:10.187 ************************************
00:05:10.187 START TEST odd_alloc
00:05:10.187 ************************************
00:05:10.187 21:01:32 -- common/autotest_common.sh@1104 -- # odd_alloc
00:05:10.187 21:01:32 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:10.187 21:01:32 -- setup/hugepages.sh@49 -- # local size=2098176
00:05:10.187 21:01:32 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:10.187 21:01:32 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:10.187 21:01:32 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
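odd_alloc asks get_test_nr_hugepages for 2098176 kB, which is not a whole number of 2048 kB pages, and the harness lands on the odd count 1025 (the HUGEMEM=2049 below is the same request expressed in megabytes). The arithmetic as a sketch, under the assumption that the size is rounded up to whole pages; the exact expression inside hugepages.sh may differ:

    #!/usr/bin/env bash
    hugemem_mb=2049                            # HUGEMEM, in MB
    default_hugepages_kb=2048                  # Hugepagesize on this VM
    size_kb=$(( hugemem_mb * 1024 ))           # 2098176 kB requested
    # 2098176 / 2048 = 1024.5, so rounding up gives the odd count 1025
    nr_hugepages=$(( (size_kb + default_hugepages_kb - 1) / default_hugepages_kb ))
    echo "nr_hugepages=$nr_hugepages"          # -> nr_hugepages=1025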
00:05:10.187 21:01:32 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:10.187 21:01:32 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:05:10.187 21:01:32 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:10.187 21:01:32 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:10.187 21:01:32 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:10.187 21:01:32 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:10.187 21:01:32 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:10.187 21:01:32 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:10.187 21:01:32 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:10.187 21:01:32 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:10.187 21:01:32 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:10.187 21:01:32 -- setup/hugepages.sh@83 -- # : 0
00:05:10.187 21:01:32 -- setup/hugepages.sh@84 -- # : 0
00:05:10.187 21:01:32 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:10.187 21:01:32 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:10.187 21:01:32 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:10.187 21:01:32 -- setup/hugepages.sh@160 -- # setup output
00:05:10.187 21:01:32 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:10.187 21:01:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:10.445 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:10.445 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:11.014 21:01:33 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:11.014 21:01:33 -- setup/hugepages.sh@89 -- # local node
00:05:11.014 21:01:33 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:11.014 21:01:33 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:11.014 21:01:33 -- setup/hugepages.sh@92 -- # local surp
00:05:11.014 21:01:33 -- setup/hugepages.sh@93 -- # local resv
00:05:11.014 21:01:33 -- setup/hugepages.sh@94 -- # local anon
00:05:11.014 21:01:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:11.014 21:01:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:11.014 21:01:33 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:11.014 21:01:33 -- setup/common.sh@18 -- # local node=
00:05:11.014 21:01:33 -- setup/common.sh@19 -- # local var val
00:05:11.014 21:01:33 -- setup/common.sh@20 -- # local mem_f mem
00:05:11.014 21:01:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.014 21:01:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.014 21:01:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.014 21:01:33 -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.014 21:01:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.014 21:01:33 -- setup/common.sh@31 -- # IFS=': '
00:05:11.014 21:01:33 -- setup/common.sh@31 -- # read -r var val _
00:05:11.014 21:01:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3130800 kB' 'MemAvailable: 9501416 kB' 'Buffers: 42336 kB' 'Cached: 6379096 kB' 'SwapCached: 0 kB' 'Active: 2122064 kB' 'Inactive: 4429672 kB' 'Active(anon): 139328 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982736 kB' 'Inactive(file): 4427880 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 148368 kB' 'Mapped: 73268 kB' 'Shmem: 2616 kB' 'KReclaimable: 281864 kB' 'Slab: 376160 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94296 kB' 'KernelStack: 4676 kB' 'PageTables: 3712 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075952 kB' 'Committed_AS: 633108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB'
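Before counting anything, verify_nr_hugepages checks transparent hugepages (the '[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]' test at hugepages.sh@96 above): AnonHugePages only needs tracking when THP is not pinned to never, since THP can hand out huge pages behind the test's back. The same gate as a sketch, on kernels that expose the sysfs knob:

    #!/usr/bin/env bash
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    # the active mode is bracketed, e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP in play: fold AnonHugePages into the accounting
        grep AnonHugePages /proc/meminfo
    fi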
[... the read/match/continue cycle walks the snapshot above from MemTotal forward; every field ahead of AnonHugePages fails the match ...]
00:05:11.015 21:01:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:11.015 21:01:33 -- setup/common.sh@33 -- # echo 0
00:05:11.015 21:01:33 -- setup/common.sh@33 -- # return 0
00:05:11.015 21:01:33 -- setup/hugepages.sh@97 -- # anon=0
00:05:11.015 21:01:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
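With anon pinned at 0, the verifier gathers HugePages_Surp and HugePages_Rsvd next and, at hugepages.sh@107 further down, asserts that the requested count plus surplus plus reserved equals HugePages_Total. The same bookkeeping as a self-contained sketch (the awk one-liner stands in for get_meminfo; 1025 is the count odd_alloc requested):

    #!/usr/bin/env bash
    nr_hugepages=1025                          # what odd_alloc asked for
    hp() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }
    surp=$(hp HugePages_Surp)                  # 0 in the trace above
    resv=$(hp HugePages_Rsvd)                  # 0 in the trace above
    total=$(hp HugePages_Total)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent"
    else
        echo "mismatch: total=$total vs $(( nr_hugepages + surp + resv ))"
    fi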
00:05:11.015 21:01:33 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:11.015 21:01:33 -- setup/common.sh@18 -- # local node=
00:05:11.015 21:01:33 -- setup/common.sh@19 -- # local var val
00:05:11.015 21:01:33 -- setup/common.sh@20 -- # local mem_f mem
00:05:11.015 21:01:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.015 21:01:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.015 21:01:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.015 21:01:33 -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.015 21:01:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.015 21:01:33 -- setup/common.sh@31 -- # IFS=': '
00:05:11.015 21:01:33 -- setup/common.sh@31 -- # read -r var val _
00:05:11.015 21:01:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3130800 kB' 'MemAvailable: 9501416 kB' 'Buffers: 42336 kB' 'Cached: 6379096 kB' 'SwapCached: 0 kB' 'Active: 2122324 kB' 'Inactive: 4429672 kB' 'Active(anon): 139588 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982736 kB' 'Inactive(file): 4427880 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 148628 kB' 'Mapped: 73268 kB' 'Shmem: 2616 kB' 'KReclaimable: 281864 kB' 'Slab: 376160 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94296 kB' 'KernelStack: 4676 kB' 'PageTables: 3712 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075952 kB' 'Committed_AS: 638480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB'
[... the read/match/continue cycle walks the snapshot above field by field, from MemTotal through HugePages_Rsvd, with none of them matching HugePages_Surp ...]
00:05:11.015 21:01:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.015 21:01:33 -- setup/common.sh@33 -- # echo 0
00:05:11.015 21:01:33 -- setup/common.sh@33 -- # return 0
00:05:11.015 21:01:33 -- setup/hugepages.sh@99 -- # surp=0
00:05:11.015 21:01:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:11.015 21:01:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:11.015 21:01:33 -- setup/common.sh@18 -- # local node=
00:05:11.015 21:01:33 -- setup/common.sh@19 -- # local var val
00:05:11.015 21:01:33 -- setup/common.sh@20 -- # local mem_f mem
00:05:11.015 21:01:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.015 21:01:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.015 21:01:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.015 21:01:33 -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.015 21:01:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.015 21:01:33 -- setup/common.sh@31 -- # IFS=': '
00:05:11.015 21:01:33 -- setup/common.sh@31 -- # read -r var val _
00:05:11.015 21:01:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3130808 kB' 'MemAvailable: 9501424 kB' 'Buffers: 42336 kB' 'Cached: 6379096 kB' 'SwapCached: 0 kB' 'Active: 2121888 kB' 'Inactive: 4429672 kB' 'Active(anon): 139152 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982736 kB' 'Inactive(file): 4427880 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 149000 kB' 'Mapped: 73220 kB' 'Shmem: 2616 kB' 'KReclaimable: 281864 kB' 'Slab: 376160 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94296 kB' 'KernelStack: 4644 kB' 'PageTables: 3668 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075952 kB' 'Committed_AS: 638480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB'
setup/common.sh@31 -- # IFS=': ' 00:05:11.015 21:01:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.015 21:01:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.015 21:01:33 -- setup/common.sh@32 -- # continue 00:05:11.015 21:01:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.015 21:01:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.015 21:01:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.015 21:01:33 -- setup/common.sh@32 -- # continue 00:05:11.015 21:01:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.015 21:01:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.015 21:01:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.015 21:01:33 -- setup/common.sh@32 -- # continue 00:05:11.015 21:01:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.015 21:01:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.015 21:01:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.015 21:01:33 -- setup/common.sh@32 -- # continue 00:05:11.015 21:01:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.015 21:01:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.016 21:01:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.016 21:01:33 -- setup/common.sh@32 -- # continue 00:05:11.016 21:01:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.016 21:01:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.016 21:01:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.016 21:01:33 -- setup/common.sh@32 -- # continue 00:05:11.016 21:01:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.016 21:01:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.016 21:01:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.016 21:01:33 -- setup/common.sh@32 -- # continue 00:05:11.016 21:01:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.016 21:01:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.016 21:01:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.016 21:01:33 -- setup/common.sh@32 -- # continue 00:05:11.016 21:01:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.016 21:01:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.016 21:01:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.016 21:01:33 -- setup/common.sh@32 -- # continue 00:05:11.016 21:01:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.016 21:01:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.016 21:01:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.016 21:01:33 -- setup/common.sh@32 -- # continue 00:05:11.016 21:01:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.016 21:01:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.016 21:01:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.016 21:01:33 -- setup/common.sh@32 -- # continue 00:05:11.016 21:01:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.016 21:01:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.016 21:01:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.016 21:01:33 -- setup/common.sh@32 -- # continue 00:05:11.016 21:01:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.016 21:01:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.016 21:01:33 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[... xtrace elided: setup/common.sh@31-32 keeps looping over the remaining /proc/meminfo keys (SwapTotal through HugePages_Free), skipping each one with continue until the requested HugePages_Rsvd key comes up ...]
00:05:11.016 21:01:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:11.016 21:01:33 -- setup/common.sh@33 -- # echo 0
00:05:11.016 21:01:33 -- setup/common.sh@33 -- # return 0
00:05:11.016 21:01:33 -- setup/hugepages.sh@100 -- # resv=0
00:05:11.016 nr_hugepages=1025
00:05:11.016 resv_hugepages=0
00:05:11.016 surplus_hugepages=0
00:05:11.016 anon_hugepages=0
00:05:11.016 21:01:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:11.016 21:01:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:11.016 21:01:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:11.016 21:01:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:11.016 21:01:33 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:11.016 21:01:33 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:05:11.016 21:01:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:11.016 21:01:33 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:11.016 21:01:33 -- setup/common.sh@18 -- # local node=
00:05:11.016 21:01:33 -- setup/common.sh@19 -- # local var val
00:05:11.016 21:01:33 -- setup/common.sh@20 -- # local mem_f mem
00:05:11.016 21:01:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.016 21:01:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.016 21:01:33 -- setup/common.sh@25 -- # [[ -n '' ]]
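The scan elided above is the heart of get_meminfo: each meminfo line is split on ': ' into a key and a value, and the loop skips keys with continue until the requested one matches, then echoes the value. A minimal standalone sketch of the same technique (lookup_meminfo is a hypothetical name, and the per-node handling is left out):

#!/usr/bin/env bash
# Scan /proc/meminfo line by line, split each record on ': ' into key and
# value, and print the value of the first key that matches the request.
lookup_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

lookup_meminfo HugePages_Rsvd   # prints 0 on the machine traced above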
00:05:11.016 21:01:33 -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.016 21:01:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.016 21:01:33 -- setup/common.sh@31 -- # IFS=': '
00:05:11.016 21:01:33 -- setup/common.sh@31 -- # read -r var val _
00:05:11.016 21:01:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3131172 kB' 'MemAvailable: 9501788 kB' 'Buffers: 42336 kB' 'Cached: 6379096 kB' 'SwapCached: 0 kB' 'Active: 2121912 kB' 'Inactive: 4429672 kB' 'Active(anon): 139176 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982736 kB' 'Inactive(file): 4427880 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 148884 kB' 'Mapped: 73124 kB' 'Shmem: 2616 kB' 'KReclaimable: 281864 kB' 'Slab: 376096 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94232 kB' 'KernelStack: 4600 kB' 'PageTables: 3488 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075952 kB' 'Committed_AS: 636412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB'
[... xtrace elided: the same setup/common.sh@31-32 loop walks every key of this dump (MemTotal through CmaFree), skipping each with continue until HugePages_Total matches ...]
00:05:11.018 21:01:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.018 21:01:33 -- setup/common.sh@33 -- # echo 1025
00:05:11.018 21:01:33 -- setup/common.sh@33 -- # return 0
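The arithmetic guards around this lookup ((( 1025 == nr_hugepages + surp + resv )) before it, and the same check repeated at hugepages.sh@110 just below) encode the invariant the test verifies: the kernel's HugePages_Total must equal the pages the test configured plus surplus plus reserved pages. The same identity can be rechecked directly against /proc/meminfo; nr_hugepages=1025 in the sketch is this run's configured value:

# Re-derive the odd_alloc assertion: Total == configured + surplus + reserved.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1025 in this run
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # 0
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)     # 0
nr_hugepages=1025
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"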
00:05:11.018 21:01:33 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:11.018 21:01:33 -- setup/hugepages.sh@112 -- # get_nodes
00:05:11.018 21:01:33 -- setup/hugepages.sh@27 -- # local node
00:05:11.018 21:01:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:11.018 21:01:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:05:11.018 21:01:33 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:11.018 21:01:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:11.018 21:01:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:11.018 21:01:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:11.018 21:01:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:11.018 21:01:33 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:11.018 21:01:33 -- setup/common.sh@18 -- # local node=0
00:05:11.018 21:01:33 -- setup/common.sh@19 -- # local var val
00:05:11.018 21:01:33 -- setup/common.sh@20 -- # local mem_f mem
00:05:11.018 21:01:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.018 21:01:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:11.018 21:01:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:11.018 21:01:33 -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.018 21:01:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.018 21:01:33 -- setup/common.sh@31 -- # IFS=': '
00:05:11.018 21:01:33 -- setup/common.sh@31 -- # read -r var val _
00:05:11.018 21:01:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3131172 kB' 'MemUsed: 9119932 kB' 'Active: 2122172 kB' 'Inactive: 4429672 kB' 'Active(anon): 139436 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982736 kB' 'Inactive(file): 4427880 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'FilePages: 6421432 kB' 'Mapped: 73124 kB' 'AnonPages: 148756 kB' 'Shmem: 2616 kB' 'KernelStack: 4600 kB' 'PageTables: 3488 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281864 kB' 'Slab: 376096 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
[... xtrace elided: field-by-field scan of the node0 dump (MemTotal through HugePages_Free), each key compared against HugePages_Surp and skipped with continue ...]
00:05:11.018 21:01:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.018 21:01:33 -- setup/common.sh@33 -- # echo 0
00:05:11.018 21:01:33 -- setup/common.sh@33 -- # return 0
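Here get_meminfo was called with a node argument, and the trace shows mem_f being retargeted from /proc/meminfo to the per-node file under /sys. The selection logic amounts to the following sketch (simplified; both paths appear verbatim in the records above):

# Default to the system-wide file, but switch to the per-NUMA-node copy
# when a node number is given and the kernel exposes one.
node=0                                   # as in: get_meminfo HugePages_Surp 0
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
# Per-node lines carry a "Node N " prefix (e.g. "Node 0 HugePages_Surp: 0")
# which the caller strips before scanning.
grep HugePages_Surp "$mem_f"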
00:05:11.018 21:01:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:11.018 node0=1025 expecting 1025
00:05:11.018 21:01:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:11.018 21:01:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:11.018 21:01:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:11.018 21:01:33 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:05:11.018 21:01:33 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:05:11.018 
00:05:11.018 real 0m0.889s
00:05:11.018 user 0m0.221s
00:05:11.018 sys 0m0.700s
00:05:11.018 21:01:33 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:11.018 21:01:33 -- common/autotest_common.sh@10 -- # set +x
00:05:11.018 ************************************
00:05:11.018 END TEST odd_alloc
00:05:11.018 ************************************
00:05:11.018 21:01:33 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:05:11.018 21:01:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:11.018 21:01:33 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:11.018 21:01:33 -- common/autotest_common.sh@10 -- # set +x
00:05:11.018 ************************************
00:05:11.018 START TEST custom_alloc
00:05:11.018 ************************************
00:05:11.018 21:01:33 -- common/autotest_common.sh@1104 -- # custom_alloc
00:05:11.018 21:01:33 -- setup/hugepages.sh@167 -- # local IFS=,
00:05:11.018 21:01:33 -- setup/hugepages.sh@169 -- # local node
00:05:11.018 21:01:33 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:05:11.018 21:01:33 -- setup/hugepages.sh@170 -- # local nodes_hp
00:05:11.018 21:01:33 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:05:11.018 21:01:33 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:05:11.018 21:01:33 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:11.018 21:01:33 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:11.018 21:01:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:11.018 21:01:33 -- setup/hugepages.sh@57 -- # nr_hugepages=512
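get_test_nr_hugepages is asked for 1048576 kB and the trace lands on nr_hugepages=512: with the 2048 kB Hugepagesize reported in the meminfo dumps above, the page count is simply the requested size divided by the huge page size. Reproducing the arithmetic:

# 1048576 kB requested / 2048 kB per huge page = 512 pages
size_kb=1048576
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
echo $(( size_kb / hugepagesize_kb ))                                # 512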
00:05:11.018 21:01:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:11.018 21:01:33 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:05:11.018 21:01:33 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:11.018 21:01:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:11.018 21:01:33 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:11.018 21:01:33 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:11.018 21:01:33 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:11.018 21:01:33 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:11.018 21:01:33 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:11.018 21:01:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:11.018 21:01:33 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:11.018 21:01:33 -- setup/hugepages.sh@83 -- # : 0
00:05:11.018 21:01:33 -- setup/hugepages.sh@84 -- # : 0
00:05:11.018 21:01:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:11.018 21:01:33 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:05:11.018 21:01:33 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:05:11.018 21:01:33 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:11.018 21:01:33 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:11.018 21:01:33 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:11.018 21:01:33 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:05:11.018 21:01:33 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:05:11.018 21:01:33 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:11.018 21:01:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:11.018 21:01:33 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:11.018 21:01:33 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:11.018 21:01:33 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:11.018 21:01:33 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:11.018 21:01:33 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:05:11.018 21:01:33 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:11.018 21:01:33 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:11.018 21:01:33 -- setup/hugepages.sh@78 -- # return 0
00:05:11.018 21:01:33 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:05:11.018 21:01:33 -- setup/hugepages.sh@187 -- # setup output
00:05:11.018 21:01:33 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:11.018 21:01:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:11.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:11.276 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
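custom_alloc hands setup.sh a HUGENODE spec (HUGENODE='nodes_hp[0]=512' above) so the reservation lands on a specific NUMA node. The standard kernel interface for such per-node reservations is the per-node sysfs knob sketched below; the exact plumbing inside setup.sh is not visible in this log, so treat this as the underlying mechanism rather than the script's literal code:

# Reserve 512 x 2048 kB huge pages on NUMA node 0 (needs root).
echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
# Read it back; the meminfo dump below should then report HugePages_Total: 512.
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages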
00:05:11.848 21:01:34 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:05:11.848 21:01:34 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:05:11.848 21:01:34 -- setup/hugepages.sh@89 -- # local node
00:05:11.848 21:01:34 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:11.848 21:01:34 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:11.848 21:01:34 -- setup/hugepages.sh@92 -- # local surp
00:05:11.848 21:01:34 -- setup/hugepages.sh@93 -- # local resv
00:05:11.848 21:01:34 -- setup/hugepages.sh@94 -- # local anon
00:05:11.848 21:01:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:11.848 21:01:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:11.848 21:01:34 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:11.848 21:01:34 -- setup/common.sh@18 -- # local node=
00:05:11.848 21:01:34 -- setup/common.sh@19 -- # local var val
00:05:11.848 21:01:34 -- setup/common.sh@20 -- # local mem_f mem
00:05:11.848 21:01:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.848 21:01:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.848 21:01:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.848 21:01:34 -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.848 21:01:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.848 21:01:34 -- setup/common.sh@31 -- # IFS=': '
00:05:11.848 21:01:34 -- setup/common.sh@31 -- # read -r var val _
00:05:11.848 21:01:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4194560 kB' 'MemAvailable: 10565176 kB' 'Buffers: 42336 kB' 'Cached: 6379096 kB' 'SwapCached: 0 kB' 'Active: 2109084 kB' 'Inactive: 4429668 kB' 'Active(anon): 126344 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982740 kB' 'Inactive(file): 4427876 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 136040 kB' 'Mapped: 72532 kB' 'Shmem: 2616 kB' 'KReclaimable: 281864 kB' 'Slab: 376200 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94336 kB' 'KernelStack: 4456 kB' 'PageTables: 3064 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 607712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14068 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB'
[... xtrace elided: the per-field scan runs again, comparing every key of this dump against AnonHugePages until it matches ...]
00:05:11.849 21:01:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:11.849 21:01:34 -- setup/common.sh@33 -- # echo 0
00:05:11.849 21:01:34 -- setup/common.sh@33 -- # return 0
00:05:11.849 21:01:34 -- setup/hugepages.sh@97 -- # anon=0
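The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] record at hugepages.sh@96 above is a transparent-hugepage gate: the left-hand side is the kernel's THP mode line, where the bracketed word marks the active mode, and the AnonHugePages bookkeeping only matters when THP is not disabled outright. A sketch of the same gate; that the script reads the standard sysfs file below is an assumption, since the trace only shows the already-expanded string:

# THP state reads like "always [madvise] never"; skip anonymous-hugepage
# accounting only when the active mode is "never".
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
if [[ $thp != *'[never]'* ]]; then
    grep AnonHugePages /proc/meminfo
fi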
00:05:11.849 21:01:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:11.849 21:01:34 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:11.849 21:01:34 -- setup/common.sh@18 -- # local node=
00:05:11.849 21:01:34 -- setup/common.sh@19 -- # local var val
00:05:11.849 21:01:34 -- setup/common.sh@20 -- # local mem_f mem
00:05:11.849 21:01:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.849 21:01:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.849 21:01:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.849 21:01:34 -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.849 21:01:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.849 21:01:34 -- setup/common.sh@31 -- # IFS=': '
00:05:11.849 21:01:34 -- setup/common.sh@31 -- # read -r var val _
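The mapfile/strip pair that recurs in every one of these lookups (mapfile -t mem, then mem=("${mem[@]#Node +([0-9]) }")) slurps the file into an array and removes the "Node N " prefix that per-node meminfo lines carry, so one scanner handles both sources. A self-contained sketch; note that +([0-9]) is an extglob pattern, so the shopt is needed in a fresh shell even though the traced script already runs with it enabled:

#!/usr/bin/env bash
# Slurp a per-node meminfo into an array, one line per element, then strip
# the leading "Node N " prefix so the lines look like /proc/meminfo's.
shopt -s extglob
mapfile -t mem < /sys/devices/system/node/node0/meminfo
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]:0:3}"   # first few normalized lines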
'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB' 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # continue 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # continue 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # continue 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # continue 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # continue 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # continue 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # continue 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # continue 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # continue 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # continue 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # continue 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.849 21:01:34 -- setup/common.sh@32 -- # continue 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.849 21:01:34 -- setup/common.sh@31 -- # read -r var val _ 
00:05:11.849 21:01:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.849 21:01:34 -- setup/common.sh@32 -- # continue
[... the same "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" xtrace pair repeats for every remaining /proc/meminfo field, Mlocked through HugePages_Rsvd; none match ...]
00:05:11.850 21:01:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.850 21:01:34 -- setup/common.sh@33 -- # echo 0
00:05:11.850 21:01:34 -- setup/common.sh@33 -- # return 0
00:05:11.850 21:01:34 -- setup/hugepages.sh@99 -- # surp=0
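The compare-and-continue pairs above are setup/common.sh's get_meminfo helper scanning a meminfo file one "Key: value" line at a time: each key that is not the requested one falls through to continue, which is why every field leaves an identical pair in the xtrace. A minimal sketch of that loop as reconstructed from the trace alone, not the verbatim SPDK source (names follow the traced locals; details such as error handling may differ):

    # get_meminfo <key> [<node>] -- reconstruction of the traced helper.
    get_meminfo() {
        local get=$1 node=$2
        local var val _ mem_f mem line
        mem_f=/proc/meminfo
        # With a node argument the per-node meminfo is read instead
        # (the node=0 call later in this log takes that branch).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        shopt -s extglob
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # strip "Node N " prefixes, as at @29
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # the repeated pairs seen in the trace
            echo "$val"                       # e.g. 0 for HugePages_Surp here
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Surp it prints only the value column, which is what hugepages.sh captures into surp above.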
00:05:11.850 21:01:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:11.850 21:01:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:11.850 21:01:34 -- setup/common.sh@18 -- # local node=
00:05:11.850 21:01:34 -- setup/common.sh@19 -- # local var val
00:05:11.850 21:01:34 -- setup/common.sh@20 -- # local mem_f mem
00:05:11.850 21:01:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.850 21:01:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.850 21:01:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.850 21:01:34 -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.850 21:01:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.850 21:01:34 -- setup/common.sh@31 -- # IFS=': '
00:05:11.850 21:01:34 -- setup/common.sh@31 -- # read -r var val _
00:05:11.850 21:01:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4194308 kB' 'MemAvailable: 10564924 kB' 'Buffers: 42336 kB' 'Cached: 6379096 kB' 'SwapCached: 0 kB' 'Active: 2109292 kB' 'Inactive: 4429668 kB' 'Active(anon): 126552 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982740 kB' 'Inactive(file): 4427876 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 135468 kB' 'Mapped: 72532 kB' 'Shmem: 2616 kB' 'KReclaimable: 281864 kB' 'Slab: 376200 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94336 kB' 'KernelStack: 4424 kB' 'PageTables: 3004 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 607712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14116 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB'
[... the "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" pair repeats for each field, MemTotal through HugePages_Free; none match ...]
00:05:11.852 21:01:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:11.852 21:01:34 -- setup/common.sh@33 -- # echo 0
00:05:11.852 21:01:34 -- setup/common.sh@33 -- # return 0
00:05:11.852 21:01:34 -- setup/hugepages.sh@100 -- # resv=0
00:05:11.852 21:01:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:11.852 nr_hugepages=512
00:05:11.852 21:01:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:11.852 resv_hugepages=0
00:05:11.852 21:01:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:11.852 surplus_hugepages=0
00:05:11.852 21:01:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:11.852 anon_hugepages=0
00:05:11.852 21:01:34 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:11.852 21:01:34 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
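The @107 and @109 lines above are the consistency check this test is really about: the 512 pages the test configured must match the kernel's count once surplus and reserved pages are folded in, and with the values just parsed that is 512 == 512 + 0 + 0. The same arithmetic stand-alone, with values taken from this run:

    nr_hugepages=512 surp=0 resv=0
    (( 512 == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'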
00:05:11.852 21:01:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:11.852 21:01:34 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:11.852 21:01:34 -- setup/common.sh@18 -- # local node=
00:05:11.852 21:01:34 -- setup/common.sh@19 -- # local var val
00:05:11.852 21:01:34 -- setup/common.sh@20 -- # local mem_f mem
00:05:11.852 21:01:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.852 21:01:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.852 21:01:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.852 21:01:34 -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.852 21:01:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.852 21:01:34 -- setup/common.sh@31 -- # IFS=': '
00:05:11.852 21:01:34 -- setup/common.sh@31 -- # read -r var val _
00:05:11.852 21:01:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4194444 kB' 'MemAvailable: 10565060 kB' 'Buffers: 42336 kB' 'Cached: 6379096 kB' 'SwapCached: 0 kB' 'Active: 2109516 kB' 'Inactive: 4429668 kB' 'Active(anon): 126776 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982740 kB' 'Inactive(file): 4427876 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 135800 kB' 'Mapped: 72580 kB' 'Shmem: 2616 kB' 'KReclaimable: 281864 kB' 'Slab: 376056 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94192 kB' 'KernelStack: 4444 kB' 'PageTables: 2916 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 617444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14132 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB'
[... the "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" pair repeats for each field, MemTotal through CmaFree; none match ...]
00:05:11.853 21:01:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.853 21:01:34 -- setup/common.sh@33 -- # echo 512
00:05:11.853 21:01:34 -- setup/common.sh@33 -- # return 0
00:05:11.853 21:01:34 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:11.853 21:01:34 -- setup/hugepages.sh@112 -- # get_nodes
00:05:11.853 21:01:34 -- setup/hugepages.sh@27 -- # local node
00:05:11.853 21:01:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:11.853 21:01:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:11.853 21:01:34 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:11.853 21:01:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:11.853 21:01:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:11.853 21:01:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:11.853 21:01:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:11.853 21:01:34 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:11.853 21:01:34 -- setup/common.sh@18 -- # local node=0
00:05:11.853 21:01:34 -- setup/common.sh@19 -- # local var val
00:05:11.853 21:01:34 -- setup/common.sh@20 -- # local mem_f mem
00:05:11.853 21:01:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.853 21:01:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:11.853 21:01:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:11.853 21:01:34 -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.853 21:01:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.853 21:01:34 -- setup/common.sh@31 -- # IFS=': '
00:05:11.853 21:01:34 -- setup/common.sh@31 -- # read -r var val _
00:05:11.853 21:01:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 4194160 kB' 'MemUsed: 8056944 kB' 'Active: 2109296 kB' 'Inactive: 4429668 kB' 'Active(anon): 126556 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982740 kB' 'Inactive(file): 4427876 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'FilePages: 6421432 kB' 'Mapped: 72532 kB' 'AnonPages: 135668 kB' 'Shmem: 2616 kB' 'KernelStack: 4396 kB' 'PageTables: 2836 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281864 kB' 'Slab: 376048 kB' 'SReclaimable: 281864 kB' 'SUnreclaim: 94184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
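This node-scoped call differs from the earlier ones: with node=0 the @23 test succeeds and @24 switches mem_f to /sys/devices/system/node/node0/meminfo, whose dump above carries per-node fields such as MemUsed and FilePages and omits system-wide ones such as SwapTotal. A small sketch of that path selection, under the same caveats as the reconstruction earlier in this log (assumes a Linux machine exposing a node0):

    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node view, as at @24
    fi
    shopt -s extglob
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemUsed: ..." becomes "MemUsed: ..."
    printf '%s\n' "${mem[@]:0:3}"      # first few cleaned lines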
[... the "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" pair repeats for each node0 field, MemTotal through HugePages_Free; none match ...]
00:05:11.854 21:01:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.854 21:01:34 -- setup/common.sh@33 -- # echo 0
00:05:11.854 21:01:34 -- setup/common.sh@33 -- # return 0
00:05:11.854 21:01:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:11.854 21:01:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:11.854 21:01:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:11.854 21:01:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:11.854 21:01:34 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:11.854 node0=512 expecting 512
00:05:11.854 21:01:34 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:11.854 real 0m0.770s
00:05:11.854 user 0m0.215s
00:05:11.854 sys 0m0.588s
00:05:11.854 ************************************
00:05:11.854 END TEST custom_alloc
00:05:11.854 21:01:34 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:11.854 21:01:34 -- common/autotest_common.sh@10 -- # set +x
00:05:11.854 ************************************
00:05:11.854 21:01:34 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:11.854 21:01:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:11.854 21:01:34 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:11.854 21:01:34 -- common/autotest_common.sh@10 -- # set +x
00:05:11.854 ************************************
00:05:11.854 START TEST no_shrink_alloc
00:05:11.854 ************************************
00:05:11.854 21:01:34 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:05:11.854 21:01:34 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:11.854 21:01:34 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:11.854 21:01:34 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:11.854 21:01:34 -- setup/hugepages.sh@51 -- # shift
00:05:11.854 21:01:34 -- setup/hugepages.sh@52 -- # node_ids=("$@")
00:05:11.854 21:01:34 -- setup/hugepages.sh@52 -- # local node_ids
00:05:11.854 21:01:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:11.854 21:01:34 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:11.854 21:01:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:11.854 21:01:34 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:05:11.854 21:01:34 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:11.854 21:01:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:11.854 21:01:34 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:11.854 21:01:34 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:11.854 21:01:34 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:11.854 21:01:34 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:11.854 21:01:34 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:11.854 21:01:34 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:11.854 21:01:34 -- setup/hugepages.sh@73 -- # return 0
00:05:11.854 21:01:34 -- setup/hugepages.sh@198 -- # setup output
00:05:11.854 21:01:34 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:11.854 21:01:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:12.113 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:12.113 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
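The get_test_nr_hugepages 2097152 0 trace above converts a pool size in kB into a page count: with the 2048 kB Hugepagesize reported in the meminfo dumps, 2097152 / 2048 = 1024, which is the nr_hugepages=1024 assigned at @57 and the 'Hugetlb: 2097152 kB' visible in the dump that follows. A sketch of that conversion, consistent with the traced values though not the verbatim script:

    size=2097152            # requested pool size in kB (first argument in the trace)
    default_hugepages=2048  # Hugepagesize in kB, from /proc/meminfo
    (( size >= default_hugepages )) || exit 1    # the @55 guard
    nr_hugepages=$(( size / default_hugepages ))
    echo "nr_hugepages=$nr_hugepages"            # prints nr_hugepages=1024 for this run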
setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.053 21:01:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:13.053 21:01:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:13.053 21:01:35 -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.053 21:01:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.053 21:01:35 -- setup/common.sh@31 -- # IFS=': '
00:05:13.053 21:01:35 -- setup/common.sh@31 -- # read -r var val _
00:05:13.053 21:01:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3146516 kB' 'MemAvailable: 9517156 kB' 'Buffers: 42344 kB' 'Cached: 6379100 kB' 'SwapCached: 0 kB' 'Active: 2108684 kB' 'Inactive: 4429672 kB' 'Active(anon): 125940 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982744 kB' 'Inactive(file): 4427880 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 135496 kB' 'Mapped: 72560 kB' 'Shmem: 2616 kB' 'KReclaimable: 281880 kB' 'Slab: 375796 kB' 'SReclaimable: 281880 kB' 'SUnreclaim: 93916 kB' 'KernelStack: 4368 kB' 'PageTables: 3144 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 591804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14068 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB'
00:05:13.053 21:01:35 -- setup/common.sh@32 -- # [xtrace condensed: each meminfo key above is tested against \A\n\o\n\H\u\g\e\P\a\g\e\s and skipped via continue until AnonHugePages matches]
00:05:13.055 21:01:35 -- setup/common.sh@33 -- # echo 0
00:05:13.055 21:01:35 -- setup/common.sh@33 -- # return 0
00:05:13.055 21:01:35 -- setup/hugepages.sh@97 -- # anon=0
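The trace above is get_meminfo walking a snapshot of /proc/meminfo one key at a time until the requested key matches. A minimal standalone sketch of that lookup pattern, assuming an illustrative name get_meminfo_value and a plain for-loop (the script's real helper is get_meminfo and streams the array through read):

    #!/usr/bin/env bash
    # Sketch only: look up a single key in a /proc/meminfo snapshot.
    get_meminfo_value() {                 # hypothetical name; the script's helper is get_meminfo
        local get=$1 var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"         # one array element per meminfo line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # mirrors the long [[ ... ]] / continue run
            echo "$val"                   # e.g. 0 for AnonHugePages on this host
            return 0
        done
        return 1
    }
    anon=$(get_meminfo_value AnonHugePages)    # -> 0, matching hugepages.sh@97 above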
00:05:13.055 21:01:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:13.055 21:01:35 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:13.055 21:01:35 -- setup/common.sh@18 -- # local node=
00:05:13.055 21:01:35 -- setup/common.sh@19 -- # local var val
00:05:13.055 21:01:35 -- setup/common.sh@20 -- # local mem_f mem
00:05:13.055 21:01:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.055 21:01:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:13.055 21:01:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:13.055 21:01:35 -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.055 21:01:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.055 21:01:35 -- setup/common.sh@31 -- # IFS=': '
00:05:13.055 21:01:35 -- setup/common.sh@31 -- # read -r var val _
00:05:13.055 21:01:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3146516 kB' 'MemAvailable: 9517156 kB' 'Buffers: 42344 kB' 'Cached: 6379100 kB' 'SwapCached: 0 kB' 'Active: 2108860 kB' 'Inactive: 4429672 kB' 'Active(anon): 126116 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982744 kB' 'Inactive(file): 4427880 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 135548 kB' 'Mapped: 72464 kB' 'Shmem: 2616 kB' 'KReclaimable: 281880 kB' 'Slab: 375796 kB' 'SReclaimable: 281880 kB' 'SUnreclaim: 93916 kB' 'KernelStack: 4320 kB' 'PageTables: 3068 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 597528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14084 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB'
00:05:13.055 21:01:35 -- setup/common.sh@32 -- # [xtrace condensed: each meminfo key above is tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped via continue until HugePages_Surp matches]
00:05:13.057 21:01:35 -- setup/common.sh@33 -- # echo 0
00:05:13.057 21:01:35 -- setup/common.sh@33 -- # return 0
00:05:13.057 21:01:35 -- setup/hugepages.sh@99 -- # surp=0
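Every lookup also normalizes the array with mem=("${mem[@]#Node +([0-9]) }") at common.sh@29. That expansion is a no-op against /proc/meminfo but matters for the per-node sysfs files, whose lines carry a "Node N " prefix. A short sketch of what the expansion does, assuming node0 exists and noting that +([0-9]) is an extended glob so extglob must be enabled:

    #!/usr/bin/env bash
    shopt -s extglob                     # +([0-9]) below is an extended glob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    # Strip the "Node 0 " prefix from every element, as common.sh@29 does:
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[0]}"            # "MemTotal: ..." instead of "Node 0 MemTotal: ..."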
00:05:13.057 21:01:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:13.057 21:01:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:13.057 21:01:35 -- setup/common.sh@18 -- # local node=
00:05:13.057 21:01:35 -- setup/common.sh@19 -- # local var val
00:05:13.057 21:01:35 -- setup/common.sh@20 -- # local mem_f mem
00:05:13.057 21:01:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.057 21:01:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:13.057 21:01:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:13.057 21:01:35 -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.057 21:01:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.057 21:01:35 -- setup/common.sh@31 -- # IFS=': '
00:05:13.057 21:01:35 -- setup/common.sh@31 -- # read -r var val _
00:05:13.057 21:01:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3146776 kB' 'MemAvailable: 9517416 kB' 'Buffers: 42344 kB' 'Cached: 6379100 kB' 'SwapCached: 0 kB' 'Active: 2109120 kB' 'Inactive: 4429672 kB' 'Active(anon): 126376 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982744 kB' 'Inactive(file): 4427880 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 135680 kB' 'Mapped: 72464 kB' 'Shmem: 2616 kB' 'KReclaimable: 281880 kB' 'Slab: 375796 kB' 'SReclaimable: 281880 kB' 'SUnreclaim: 93916 kB' 'KernelStack: 4320 kB' 'PageTables: 3068 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 597528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB'
00:05:13.057 21:01:35 -- setup/common.sh@32 -- # [xtrace condensed: each meminfo key above is tested against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped via continue until HugePages_Rsvd matches]
00:05:13.059 21:01:35 -- setup/common.sh@33 -- # echo 0
00:05:13.059 21:01:35 -- setup/common.sh@33 -- # return 0
00:05:13.059 21:01:35 -- setup/hugepages.sh@100 -- # resv=0
00:05:13.059 nr_hugepages=1024
00:05:13.059 resv_hugepages=0
00:05:13.059 surplus_hugepages=0
00:05:13.059 21:01:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:13.059 21:01:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:13.059 21:01:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:13.059 anon_hugepages=0
00:05:13.059 21:01:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:13.059 21:01:35 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:13.059 21:01:35 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
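hugepages.sh@107-110 checks that the requested pool size equals HugePages_Total once surplus and reserved pages are accounted for. A condensed sketch of that arithmetic, assuming values hard-coded from this run (the real script reads them via get_meminfo):

    #!/usr/bin/env bash
    nr_hugepages=1024   # requested pool size
    surp=0              # HugePages_Surp from /proc/meminfo
    resv=0              # HugePages_Rsvd from /proc/meminfo
    total=1024          # HugePages_Total from /proc/meminfo
    # Mirrors: (( total == nr_hugepages + surp + resv ))
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: $total pages"
    else
        echo "hugepage accounting mismatch" >&2
        exit 1
    fi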
00:05:13.059 21:01:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:13.059 21:01:35 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:13.059 21:01:35 -- setup/common.sh@18 -- # local node=
00:05:13.059 21:01:35 -- setup/common.sh@19 -- # local var val
00:05:13.059 21:01:35 -- setup/common.sh@20 -- # local mem_f mem
00:05:13.059 21:01:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.059 21:01:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:13.059 21:01:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:13.059 21:01:35 -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.059 21:01:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.059 21:01:35 -- setup/common.sh@31 -- # IFS=': '
00:05:13.059 21:01:35 -- setup/common.sh@31 -- # read -r var val _
00:05:13.059 21:01:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3146956 kB' 'MemAvailable: 9517596 kB' 'Buffers: 42344 kB' 'Cached: 6379100 kB' 'SwapCached: 0 kB' 'Active: 2108720 kB' 'Inactive: 4429672 kB' 'Active(anon): 125976 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982744 kB' 'Inactive(file): 4427880 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 135512 kB' 'Mapped: 72452 kB' 'Shmem: 2616 kB' 'KReclaimable: 281880 kB' 'Slab: 375812 kB' 'SReclaimable: 281880 kB' 'SUnreclaim: 93932 kB' 'KernelStack: 4388 kB' 'PageTables: 3060 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 602368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB'
00:05:13.059 21:01:35 -- setup/common.sh@32 -- # [xtrace condensed: each meminfo key above is tested against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and skipped via continue until HugePages_Total matches]
00:05:13.061 21:01:35 -- setup/common.sh@33 -- # echo 1024
00:05:13.061 21:01:35 -- setup/common.sh@33 -- # return 0
00:05:13.061 21:01:35 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:13.061 21:01:35 -- setup/hugepages.sh@112 -- # get_nodes
00:05:13.061 21:01:35 -- setup/hugepages.sh@27 -- # local node
00:05:13.061 21:01:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:13.061 21:01:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:13.061 21:01:35 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:13.061 21:01:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:13.061 21:01:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:13.061 21:01:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
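The next lookup passes a node id (get_meminfo HugePages_Surp 0), which switches the source file from /proc/meminfo to the per-node sysfs copy. A sketch of that selection, simplified from the common.sh@18-24 trace (the variable handling here is illustrative, not the function's verbatim source):

    #!/usr/bin/env bash
    node=$1                              # e.g. 0; empty selects the system-wide file
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        # with node=0 this becomes /sys/devices/system/node/node0/meminfo, as traced below;
        # with node unset the path .../node/meminfo does not exist and /proc/meminfo is kept
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "reading $mem_f"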
kB' 'Inactive(file): 4427880 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'FilePages: 6421444 kB' 'Mapped: 72452 kB' 'AnonPages: 135384 kB' 'Shmem: 2616 kB' 'KernelStack: 4388 kB' 'PageTables: 3060 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281880 kB' 'Slab: 375812 kB' 'SReclaimable: 281880 kB' 'SUnreclaim: 93932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.061 21:01:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.061 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.061 21:01:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:13.062 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.062 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.062 21:01:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.062 21:01:35 -- setup/common.sh@33 -- # echo 0 00:05:13.062 21:01:35 -- setup/common.sh@33 -- # return 0 00:05:13.062 21:01:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:13.062 21:01:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:13.062 21:01:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:13.062 21:01:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:13.062 node0=1024 expecting 1024 00:05:13.062 21:01:35 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:13.062 21:01:35 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:13.062 21:01:35 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 
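[annotation] The pass above just concluded: node0 reported 1024 hugepages against an expectation of 1024, the [[ 1024 == \1\0\2\4 ]] test passed, and CLEAR_HUGE=no prepared the next pass. In outline the per-node comparison works as sketched below — nodes_test and nodes_sys follow the array names visible in this trace, while the wiring around them is illustrative, not setup/hugepages.sh verbatim:

    # Sketch of the per-node verification that just completed above.
    # nodes_test holds expected per-node hugepage counts, nodes_sys the counts
    # read back from sysfs (names from the trace; wiring is illustrative).
    shopt -s extglob
    declare -A nodes_sys nodes_test
    nodes_test[0]=1024   # expectation recorded earlier in the run

    for node in /sys/devices/system/node/node+([0-9]); do
        # per-node meminfo lines look like "Node 0 HugePages_Total:  1024"
        nodes_sys[${node##*node}]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
    done

    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
        [[ ${nodes_sys[$node]} == "${nodes_test[$node]}" ]] || exit 1
    done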
00:05:13.062 21:01:35 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:13.062 21:01:35 -- setup/hugepages.sh@202 -- # setup output 00:05:13.062 21:01:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.062 21:01:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:13.327 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:13.327 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:13.327 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:13.327 21:01:35 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:13.327 21:01:35 -- setup/hugepages.sh@89 -- # local node 00:05:13.327 21:01:35 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:13.327 21:01:35 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:13.327 21:01:35 -- setup/hugepages.sh@92 -- # local surp 00:05:13.327 21:01:35 -- setup/hugepages.sh@93 -- # local resv 00:05:13.327 21:01:35 -- setup/hugepages.sh@94 -- # local anon 00:05:13.327 21:01:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:13.327 21:01:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:13.327 21:01:35 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:13.327 21:01:35 -- setup/common.sh@18 -- # local node= 00:05:13.327 21:01:35 -- setup/common.sh@19 -- # local var val 00:05:13.327 21:01:35 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.327 21:01:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.327 21:01:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.327 21:01:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.327 21:01:35 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.327 21:01:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.327 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.327 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.327 21:01:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3145136 kB' 'MemAvailable: 9515776 kB' 'Buffers: 42344 kB' 'Cached: 6379100 kB' 'SwapCached: 0 kB' 'Active: 2109880 kB' 'Inactive: 4429668 kB' 'Active(anon): 127132 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982748 kB' 'Inactive(file): 4427876 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 137048 kB' 'Mapped: 72296 kB' 'Shmem: 2616 kB' 'KReclaimable: 281880 kB' 'Slab: 376088 kB' 'SReclaimable: 281880 kB' 'SUnreclaim: 94208 kB' 'KernelStack: 4496 kB' 'PageTables: 3360 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 602624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB' 00:05:13.327 21:01:35 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.327 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.327 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.327 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.327 21:01:35 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.327 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.327 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.327 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.327 21:01:35 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.327 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.327 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.327 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 
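[annotation] The [[ key == pattern ]] / continue pairs filling these lines are a single linear scan of /proc/meminfo for AnonHugePages. Reconstructed from the trace, the helper driving every one of these scans, get_meminfo, amounts to the following sketch — not the setup/common.sh source verbatim:

    # get_meminfo KEY [NODE] -- reconstructed from the trace; treat as a
    # sketch rather than the setup/common.sh source.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument, read that node's own counters instead.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix (extglob)
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

Called as get_meminfo AnonHugePages it scans every key until the match and echoes the value (0 here); called with a node argument, as in the node-0 pass later in this trace, it reads /sys/devices/system/node/node0/meminfo instead.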
00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.328 21:01:35 -- setup/common.sh@33 -- # echo 0 00:05:13.328 21:01:35 -- setup/common.sh@33 -- # return 0 00:05:13.328 21:01:35 -- setup/hugepages.sh@97 -- # anon=0 00:05:13.328 21:01:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:13.328 21:01:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.328 21:01:35 -- setup/common.sh@18 -- # local node= 00:05:13.328 21:01:35 -- setup/common.sh@19 -- # local var val 00:05:13.328 21:01:35 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.328 21:01:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
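[annotation] The anon=0 just recorded above was only requested because the transparent-hugepage gate at the start of this pass ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]) did not read [never]. A hedged sketch of that gate — the sysfs path is an assumption inferred from the test string, not taken from the trace:

    # Hedged sketch of the THP gate that triggered the AnonHugePages lookup.
    # The sysfs path below is an assumption inferred from the
    # "always [madvise] never" string; the trace only shows the pattern test.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
    else
        anon=0
    fi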
00:05:13.328 21:01:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.328 21:01:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.328 21:01:35 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.328 21:01:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3145136 kB' 'MemAvailable: 9515776 kB' 'Buffers: 42344 kB' 'Cached: 6379100 kB' 'SwapCached: 0 kB' 'Active: 2109880 kB' 'Inactive: 4429668 kB' 'Active(anon): 127132 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982748 kB' 'Inactive(file): 4427876 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 136532 kB' 'Mapped: 72296 kB' 'Shmem: 2616 kB' 'KReclaimable: 281880 kB' 'Slab: 376088 kB' 'SReclaimable: 281880 kB' 'SUnreclaim: 94208 kB' 'KernelStack: 4496 kB' 'PageTables: 3360 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 602624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB' 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': 
' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.328 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.328 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 
-- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.329 21:01:35 -- setup/common.sh@33 -- # echo 0 00:05:13.329 21:01:35 -- setup/common.sh@33 -- # return 0 00:05:13.329 21:01:35 -- setup/hugepages.sh@99 -- # surp=0 00:05:13.329 21:01:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:13.329 21:01:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:13.329 21:01:35 -- setup/common.sh@18 -- # local node= 00:05:13.329 21:01:35 -- setup/common.sh@19 -- # local var val 00:05:13.329 21:01:35 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.329 21:01:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.329 21:01:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.329 21:01:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.329 21:01:35 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.329 21:01:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3145492 kB' 'MemAvailable: 9516132 kB' 'Buffers: 42344 kB' 'Cached: 6379100 kB' 'SwapCached: 0 kB' 'Active: 2109648 kB' 'Inactive: 4429668 kB' 'Active(anon): 126900 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982748 kB' 'Inactive(file): 4427876 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 135532 kB' 'Mapped: 72344 kB' 'Shmem: 2616 kB' 'KReclaimable: 281880 kB' 'Slab: 376284 kB' 'SReclaimable: 281880 kB' 'SUnreclaim: 94404 kB' 'KernelStack: 4404 kB' 'PageTables: 3488 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 602624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14116 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB' 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.329 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.329 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 
-- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': 
' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.330 21:01:35 -- setup/common.sh@33 -- # echo 0 00:05:13.330 21:01:35 -- setup/common.sh@33 -- # return 0 00:05:13.330 21:01:35 -- setup/hugepages.sh@100 -- # resv=0 00:05:13.330 nr_hugepages=1024 00:05:13.330 resv_hugepages=0 00:05:13.330 21:01:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:13.330 21:01:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:13.330 surplus_hugepages=0 00:05:13.330 anon_hugepages=0 00:05:13.330 21:01:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:13.330 21:01:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:13.330 21:01:35 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:13.330 21:01:35 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:13.330 21:01:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:13.330 21:01:35 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:13.330 21:01:35 -- setup/common.sh@18 -- # local node= 00:05:13.330 21:01:35 -- setup/common.sh@19 -- # local var val 00:05:13.330 21:01:35 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.330 21:01:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.330 21:01:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.330 21:01:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.330 21:01:35 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.330 21:01:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12251104 kB' 'MemFree: 3145612 kB' 'MemAvailable: 9516256 kB' 'Buffers: 42344 kB' 'Cached: 6379100 kB' 'SwapCached: 0 kB' 'Active: 2109236 kB' 'Inactive: 4429672 kB' 'Active(anon): 126488 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982748 kB' 'Inactive(file): 4427880 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 135832 kB' 'Mapped: 72296 kB' 'Shmem: 2616 kB' 'KReclaimable: 281880 kB' 'Slab: 376284 kB' 'SReclaimable: 281880 kB' 'SUnreclaim: 94404 kB' 'KernelStack: 4392 kB' 'PageTables: 3336 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 602980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14132 kB' 'VmallocChunk: 0 kB' 'Percpu: 8784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 4028416 kB' 'DirectMap1G: 10485760 kB' 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.330 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.330 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- 
setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 
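[annotation] The scan in flight here is get_meminfo HugePages_Total, the last input to the consistency check that brackets it in the trace, (( 1024 == nr_hugepages + surp + resv )). Pulled together with the get_meminfo sketch above and the values from this run:

    # The accounting identity this HugePages_Total scan feeds (variable names
    # follow setup/hugepages.sh; the numbers in comments are from this run).
    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0
    total=$(get_meminfo HugePages_Total)  # 1024

    # Every page in the pool must be requested, surplus, or reserved:
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2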
00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.331 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.331 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # 
continue 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # continue 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.332 21:01:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.332 21:01:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.332 21:01:35 -- setup/common.sh@33 -- # echo 1024 00:05:13.332 21:01:35 -- setup/common.sh@33 -- # return 0 00:05:13.332 21:01:35 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:13.332 21:01:35 -- setup/hugepages.sh@112 -- # get_nodes 00:05:13.332 21:01:35 -- setup/hugepages.sh@27 -- # local node 00:05:13.332 21:01:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:13.332 21:01:35 -- 
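The trace above is setup/common.sh's get_meminfo helper doing a linear scan: it snapshots /proc/meminfo (or the per-node copy under /sys/devices/system/node when a node is requested) and walks the fields until the requested name matches, echoing the value and returning. A minimal sketch of the same pattern, reconstructed from the traced statements (the function body is illustrative, not a verbatim copy of the script):

    get_meminfo() {  # usage: get_meminfo <field> [<node>]
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # Prefer the per-node view when a node is given and sysfs exposes one
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        shopt -s extglob                  # needed for the +([0-9]) pattern below
        mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

With the node0 dump printed just below, get_meminfo HugePages_Total yields 1024, which is exactly the value the (( 1024 == nr_hugepages + surp + resv )) check consumes.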
00:05:13.332 21:01:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:13.332 21:01:35 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:13.332 21:01:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:13.332 21:01:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:13.332 21:01:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:13.332 21:01:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:13.332 21:01:35 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:13.332 21:01:35 -- setup/common.sh@18 -- # local node=0
00:05:13.332 21:01:35 -- setup/common.sh@19 -- # local var val
00:05:13.332 21:01:35 -- setup/common.sh@20 -- # local mem_f mem
00:05:13.332 21:01:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.332 21:01:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:13.332 21:01:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:13.332 21:01:35 -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.332 21:01:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.332 21:01:35 -- setup/common.sh@31 -- # IFS=': '
00:05:13.332 21:01:35 -- setup/common.sh@31 -- # read -r var val _
00:05:13.332 21:01:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3145636 kB' 'MemUsed: 9105468 kB' 'Active: 2109160 kB' 'Inactive: 4429672 kB' 'Active(anon): 126412 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1982748 kB' 'Inactive(file): 4427880 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'FilePages: 6421444 kB' 'Mapped: 72296 kB' 'AnonPages: 135600 kB' 'Shmem: 2616 kB' 'KernelStack: 4312 kB' 'PageTables: 3208 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281880 kB' 'Slab: 376284 kB' 'SReclaimable: 281880 kB' 'SUnreclaim: 94404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace trimmed: each dumped field from MemTotal through HugePages_Free fails the [[ $var == HugePages_Surp ]] comparison and the loop continues]
00:05:13.334 21:01:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:13.334 21:01:35 -- setup/common.sh@33 -- # echo 0
00:05:13.334 21:01:35 -- setup/common.sh@33 -- # return 0
00:05:13.334 21:01:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:13.334 21:01:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:13.334 21:01:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:13.334 node0=1024 expecting 1024
21:01:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:13.334 21:01:35 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:13.334 21:01:35 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:13.334
00:05:13.334 real 0m1.425s
00:05:13.334 user 0m0.500s
00:05:13.334 sys 0m0.998s
00:05:13.334 21:01:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:13.334 21:01:35 -- common/autotest_common.sh@10 -- # set +x
00:05:13.334 ************************************
00:05:13.334 END TEST no_shrink_alloc
00:05:13.334 ************************************
00:05:13.334 21:01:35 -- setup/hugepages.sh@217 -- # clear_hp
00:05:13.334 21:01:35 -- setup/hugepages.sh@37 -- # local node hp
00:05:13.334 21:01:35 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:13.334 21:01:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:13.334 21:01:35 -- setup/hugepages.sh@41 -- # echo 0
00:05:13.334 21:01:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:13.334 21:01:35 -- setup/hugepages.sh@41 -- # echo 0
00:05:13.334 21:01:35 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:13.334 21:01:35 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:13.334
00:05:13.334 real 0m6.568s
00:05:13.334 user 0m1.923s
00:05:13.334 sys 0m4.769s
00:05:13.334 21:01:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:13.334 21:01:35 -- common/autotest_common.sh@10 -- # set +x
00:05:13.334 ************************************
00:05:13.334 END TEST hugepages
00:05:13.334 ************************************
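Between tests, clear_hp (traced a few lines above) walks every NUMA node's hugepage pools and zeroes them so one test's reservation cannot leak into the next. The trace shows the per-pool loop and a bare echo 0; the sketch below assumes the echo's target is each pool's nr_hugepages knob, which is the standard sysfs interface:

    # Reset every hugepage pool on every node, then record that state.
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # assumed target; the trace shows only 'echo 0'
        done
    done
    export CLEAR_HUGE=yes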
00:05:13.334 21:01:35 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:05:13.334 21:01:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:13.334 21:01:35 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:13.334 21:01:35 -- common/autotest_common.sh@10 -- # set +x
00:05:13.334 ************************************
00:05:13.334 START TEST driver
00:05:13.334 ************************************
00:05:13.334 21:01:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:05:13.618 * Looking for test storage...
00:05:13.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:05:13.618 21:01:36 -- setup/driver.sh@68 -- # setup reset
00:05:13.618 21:01:36 -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:13.618 21:01:36 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:13.877 21:01:36 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:05:13.877 21:01:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:13.877 21:01:36 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:13.877 21:01:36 -- common/autotest_common.sh@10 -- # set +x
00:05:13.877 ************************************
00:05:13.877 START TEST guess_driver
00:05:13.877 ************************************
00:05:13.877 21:01:36 -- common/autotest_common.sh@1104 -- # guess_driver
00:05:13.877 21:01:36 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:05:13.877 21:01:36 -- setup/driver.sh@47 -- # local fail=0
00:05:13.877 21:01:36 -- setup/driver.sh@49 -- # pick_driver
00:05:13.877 21:01:36 -- setup/driver.sh@36 -- # vfio
00:05:13.877 21:01:36 -- setup/driver.sh@21 -- # local iommu_groups
00:05:13.877 21:01:36 -- setup/driver.sh@22 -- # local unsafe_vfio
00:05:13.877 21:01:36 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:05:13.877 21:01:36 -- setup/driver.sh@25 -- # unsafe_vfio=N
00:05:13.877 21:01:36 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:05:13.877 21:01:36 -- setup/driver.sh@29 -- # (( 0 > 0 ))
00:05:13.877 21:01:36 -- setup/driver.sh@29 -- # [[ N == Y ]]
00:05:13.877 21:01:36 -- setup/driver.sh@32 -- # return 1
00:05:13.877 21:01:36 -- setup/driver.sh@38 -- # uio
00:05:13.877 21:01:36 -- setup/driver.sh@17 -- # is_driver uio_pci_generic
00:05:13.877 21:01:36 -- setup/driver.sh@14 -- # mod uio_pci_generic
00:05:13.877 21:01:36 -- setup/driver.sh@12 -- # dep uio_pci_generic
00:05:13.877 21:01:36 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic
00:05:13.877 21:01:36 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio.ko
00:05:13.877 insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]]
00:05:13.877 21:01:36 -- setup/driver.sh@39 -- # echo uio_pci_generic
Looking for driver=uio_pci_generic
00:05:13.877 21:01:36 -- setup/driver.sh@49 -- # driver=uio_pci_generic
00:05:13.877 21:01:36 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:05:13.877 21:01:36 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic'
00:05:13.877 21:01:36 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:13.877 21:01:36 -- setup/driver.sh@45 -- # setup output config
00:05:13.877 21:01:36 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:13.877 21:01:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:05:14.444 21:01:36 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]]
00:05:14.444 21:01:36 -- setup/driver.sh@58 -- # continue
00:05:14.444 21:01:36 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:14.444 21:01:36 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:14.444 21:01:36 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:05:14.444 21:01:36 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:15.386 21:01:38 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:05:15.386 21:01:38 -- setup/driver.sh@65 -- # setup reset
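pick_driver's decision, condensed from the statements just traced: vfio is viable only when the kernel exposes populated IOMMU groups or unsafe no-IOMMU mode is enabled; otherwise the harness falls back to uio_pci_generic, accepting it only if modprobe can resolve it to real .ko modules. A sketch of that decision against the same sysfs paths (the vfio-pci driver name and the function shape are illustrative, not lifted from setup/driver.sh):

    shopt -s nullglob   # the harness runs with nullglob, so an empty glob expands to nothing
    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*) unsafe=N
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }

On this VM the group count is 0 and unsafe_vfio=N, so the vfio branch returns 1 and the run settles on uio_pci_generic, matching the 'Looking for driver=uio_pci_generic' lines above.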
00:05:15.386 21:01:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:15.386 21:01:38 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:15.953 00:05:15.953 real 0m1.998s 00:05:15.953 user 0m0.478s 00:05:15.953 sys 0m1.477s 00:05:15.953 21:01:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.953 21:01:38 -- common/autotest_common.sh@10 -- # set +x 00:05:15.953 ************************************ 00:05:15.953 END TEST guess_driver 00:05:15.953 ************************************ 00:05:15.953 ************************************ 00:05:15.953 END TEST driver 00:05:15.953 ************************************ 00:05:15.953 00:05:15.953 real 0m2.552s 00:05:15.953 user 0m0.771s 00:05:15.953 sys 0m1.730s 00:05:15.953 21:01:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.953 21:01:38 -- common/autotest_common.sh@10 -- # set +x 00:05:15.953 21:01:38 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:15.953 21:01:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:15.953 21:01:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.953 21:01:38 -- common/autotest_common.sh@10 -- # set +x 00:05:15.953 ************************************ 00:05:15.953 START TEST devices 00:05:15.953 ************************************ 00:05:15.953 21:01:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:15.953 * Looking for test storage... 00:05:16.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:16.211 21:01:38 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:16.211 21:01:38 -- setup/devices.sh@192 -- # setup reset 00:05:16.211 21:01:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:16.211 21:01:38 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:16.470 21:01:39 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:16.470 21:01:39 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:16.470 21:01:39 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:16.470 21:01:39 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:16.470 21:01:39 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:16.470 21:01:39 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:16.470 21:01:39 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:16.470 21:01:39 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:16.470 21:01:39 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:16.470 21:01:39 -- setup/devices.sh@196 -- # blocks=() 00:05:16.470 21:01:39 -- setup/devices.sh@196 -- # declare -a blocks 00:05:16.470 21:01:39 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:16.470 21:01:39 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:16.470 21:01:39 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:16.470 21:01:39 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:16.470 21:01:39 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:16.470 21:01:39 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:16.470 21:01:39 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:16.470 21:01:39 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:16.470 21:01:39 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:16.470 21:01:39 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:16.470 21:01:39 -- 
scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:16.470 No valid GPT data, bailing 00:05:16.470 21:01:39 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:16.470 21:01:39 -- scripts/common.sh@393 -- # pt= 00:05:16.470 21:01:39 -- scripts/common.sh@394 -- # return 1 00:05:16.470 21:01:39 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:16.470 21:01:39 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:16.470 21:01:39 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:16.470 21:01:39 -- setup/common.sh@80 -- # echo 5368709120 00:05:16.470 21:01:39 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:16.470 21:01:39 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:16.470 21:01:39 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:16.470 21:01:39 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:16.470 21:01:39 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:16.470 21:01:39 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:16.470 21:01:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.470 21:01:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.470 21:01:39 -- common/autotest_common.sh@10 -- # set +x 00:05:16.470 ************************************ 00:05:16.470 START TEST nvme_mount 00:05:16.470 ************************************ 00:05:16.730 21:01:39 -- common/autotest_common.sh@1104 -- # nvme_mount 00:05:16.730 21:01:39 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:16.730 21:01:39 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:16.730 21:01:39 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.730 21:01:39 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:16.730 21:01:39 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:16.730 21:01:39 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:16.730 21:01:39 -- setup/common.sh@40 -- # local part_no=1 00:05:16.730 21:01:39 -- setup/common.sh@41 -- # local size=1073741824 00:05:16.730 21:01:39 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:16.730 21:01:39 -- setup/common.sh@44 -- # parts=() 00:05:16.730 21:01:39 -- setup/common.sh@44 -- # local parts 00:05:16.730 21:01:39 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:16.730 21:01:39 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:16.730 21:01:39 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:16.730 21:01:39 -- setup/common.sh@46 -- # (( part++ )) 00:05:16.730 21:01:39 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:16.730 21:01:39 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:16.730 21:01:39 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:16.730 21:01:39 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:17.665 Creating new GPT entries in memory. 00:05:17.665 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:17.665 other utilities. 00:05:17.665 21:01:40 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:17.665 21:01:40 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:17.665 21:01:40 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:17.665 21:01:40 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:17.665 21:01:40 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:19.052 Creating new GPT entries in memory. 00:05:19.052 The operation has completed successfully. 00:05:19.052 21:01:41 -- setup/common.sh@57 -- # (( part++ )) 00:05:19.052 21:01:41 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.052 21:01:41 -- setup/common.sh@62 -- # wait 110367 00:05:19.052 21:01:41 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:19.052 21:01:41 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:19.052 21:01:41 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:19.052 21:01:41 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:19.052 21:01:41 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:19.052 21:01:41 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:19.052 21:01:41 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:19.052 21:01:41 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:19.052 21:01:41 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:19.052 21:01:41 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:19.052 21:01:41 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:19.052 21:01:41 -- setup/devices.sh@53 -- # local found=0 00:05:19.052 21:01:41 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:19.052 21:01:41 -- setup/devices.sh@56 -- # : 00:05:19.052 21:01:41 -- setup/devices.sh@59 -- # local pci status 00:05:19.052 21:01:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.052 21:01:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:19.052 21:01:41 -- setup/devices.sh@47 -- # setup output config 00:05:19.052 21:01:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.052 21:01:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:19.052 21:01:41 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:19.052 21:01:41 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:19.052 21:01:41 -- setup/devices.sh@63 -- # found=1 00:05:19.052 21:01:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.052 21:01:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:19.052 21:01:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.052 21:01:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:19.052 21:01:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.471 21:01:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:20.471 21:01:42 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:20.471 21:01:42 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.471 21:01:42 -- setup/devices.sh@73 -- # 
[[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:20.471 21:01:42 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:20.471 21:01:42 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:20.471 21:01:42 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.471 21:01:42 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.471 21:01:42 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:20.471 21:01:42 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:20.471 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:20.471 21:01:42 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:20.471 21:01:42 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:20.471 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:20.471 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:20.471 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:20.471 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:20.471 21:01:42 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:20.471 21:01:42 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:20.471 21:01:42 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.472 21:01:42 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:20.472 21:01:42 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:20.472 21:01:42 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.472 21:01:42 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:20.472 21:01:42 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:20.472 21:01:42 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:20.472 21:01:42 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.472 21:01:42 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:20.472 21:01:42 -- setup/devices.sh@53 -- # local found=0 00:05:20.472 21:01:42 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:20.472 21:01:42 -- setup/devices.sh@56 -- # : 00:05:20.472 21:01:42 -- setup/devices.sh@59 -- # local pci status 00:05:20.472 21:01:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.472 21:01:42 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:20.472 21:01:42 -- setup/devices.sh@47 -- # setup output config 00:05:20.472 21:01:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.472 21:01:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:20.472 21:01:43 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:20.472 21:01:43 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:20.472 21:01:43 -- setup/devices.sh@63 -- # found=1 00:05:20.472 21:01:43 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:05:20.472 21:01:43 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:20.472 21:01:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.472 21:01:43 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:20.472 21:01:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.373 21:01:44 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.373 21:01:44 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:22.373 21:01:44 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.373 21:01:44 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:22.373 21:01:44 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:22.373 21:01:44 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.373 21:01:44 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:22.373 21:01:44 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:22.373 21:01:44 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:22.373 21:01:44 -- setup/devices.sh@50 -- # local mount_point= 00:05:22.373 21:01:44 -- setup/devices.sh@51 -- # local test_file= 00:05:22.373 21:01:44 -- setup/devices.sh@53 -- # local found=0 00:05:22.373 21:01:44 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:22.373 21:01:44 -- setup/devices.sh@59 -- # local pci status 00:05:22.373 21:01:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.373 21:01:44 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:22.373 21:01:44 -- setup/devices.sh@47 -- # setup output config 00:05:22.373 21:01:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.373 21:01:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:22.373 21:01:44 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.373 21:01:44 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:22.373 21:01:44 -- setup/devices.sh@63 -- # found=1 00:05:22.373 21:01:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.373 21:01:44 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.373 21:01:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.373 21:01:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.373 21:01:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.748 21:01:46 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:23.748 21:01:46 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:23.748 21:01:46 -- setup/devices.sh@68 -- # return 0 00:05:23.748 21:01:46 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:23.748 21:01:46 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.748 21:01:46 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.748 21:01:46 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.748 21:01:46 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:23.748 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:23.748 00:05:23.748 real 0m7.217s 00:05:23.748 user 0m0.751s 00:05:23.748 sys 0m4.313s 00:05:23.748 21:01:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.748 21:01:46 -- 
common/autotest_common.sh@10 -- # set +x 00:05:23.748 ************************************ 00:05:23.748 END TEST nvme_mount 00:05:23.748 ************************************ 00:05:23.748 21:01:46 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:23.748 21:01:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.748 21:01:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.748 21:01:46 -- common/autotest_common.sh@10 -- # set +x 00:05:24.006 ************************************ 00:05:24.006 START TEST dm_mount 00:05:24.006 ************************************ 00:05:24.006 21:01:46 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:24.006 21:01:46 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:24.006 21:01:46 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:24.006 21:01:46 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:24.006 21:01:46 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:24.006 21:01:46 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:24.006 21:01:46 -- setup/common.sh@40 -- # local part_no=2 00:05:24.006 21:01:46 -- setup/common.sh@41 -- # local size=1073741824 00:05:24.006 21:01:46 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:24.006 21:01:46 -- setup/common.sh@44 -- # parts=() 00:05:24.006 21:01:46 -- setup/common.sh@44 -- # local parts 00:05:24.006 21:01:46 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:24.006 21:01:46 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:24.006 21:01:46 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:24.006 21:01:46 -- setup/common.sh@46 -- # (( part++ )) 00:05:24.006 21:01:46 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:24.006 21:01:46 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:24.006 21:01:46 -- setup/common.sh@46 -- # (( part++ )) 00:05:24.006 21:01:46 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:24.006 21:01:46 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:24.006 21:01:46 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:24.006 21:01:46 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:24.942 Creating new GPT entries in memory. 00:05:24.942 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:24.942 other utilities. 00:05:24.942 21:01:47 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:24.942 21:01:47 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:24.942 21:01:47 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:24.942 21:01:47 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:24.943 21:01:47 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:25.878 Creating new GPT entries in memory. 00:05:25.878 The operation has completed successfully. 00:05:25.878 21:01:48 -- setup/common.sh@57 -- # (( part++ )) 00:05:25.878 21:01:48 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:25.878 21:01:48 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:25.878 21:01:48 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:25.878 21:01:48 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:27.255 The operation has completed successfully. 
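The dm_mount test underway here repeats the drive-preparation pattern used throughout these setup tests: wipe the old partition tables, carve fresh GPT partitions with sgdisk while holding flock on the disk, stitch the partitions into a device-mapper target, then mkfs and mount it. Condensed into a sketch (the device name and sector ranges are the ones from this run; the linear dm table is an assumption, since the trace shows only the dmsetup create call, and the harness waits for the partition uevents via sync_dev_uevents.sh rather than partprobe):

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                            # destroy GPT and MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:264191    # p1: 262144 sectors
    flock "$disk" sgdisk "$disk" --new=2:264192:526335  # p2: 262144 sectors
    # Join both partitions into one linear dm device (assumed table layout)
    printf '%s\n' '0 262144 linear /dev/nvme0n1p1 0' \
                  '262144 262144 linear /dev/nvme0n1p2 0' | dmsetup create nvme_dm_test
    mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test              # as traced at setup/common.sh@71
    mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount

The verification traced afterwards resolves the mapper name back to its dm node and checks that both partitions list it as a holder:

    dm=$(readlink -f /dev/mapper/nvme_dm_test)   # /dev/dm-0 in this run
    dm=${dm##*/}                                 # dm-0
    [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]
    [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]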
00:05:27.255 21:01:49 -- setup/common.sh@57 -- # (( part++ )) 00:05:27.255 21:01:49 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:27.255 21:01:49 -- setup/common.sh@62 -- # wait 110888 00:05:27.255 21:01:49 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:27.255 21:01:49 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:27.255 21:01:49 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:27.255 21:01:49 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:27.255 21:01:49 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:27.255 21:01:49 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:27.255 21:01:49 -- setup/devices.sh@161 -- # break 00:05:27.255 21:01:49 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:27.255 21:01:49 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:27.255 21:01:49 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:27.255 21:01:49 -- setup/devices.sh@166 -- # dm=dm-0 00:05:27.255 21:01:49 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:27.255 21:01:49 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:27.255 21:01:49 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:27.255 21:01:49 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:27.255 21:01:49 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:27.255 21:01:49 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:27.255 21:01:49 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:27.255 21:01:49 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:27.255 21:01:49 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:27.255 21:01:49 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:27.255 21:01:49 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:27.255 21:01:49 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:27.255 21:01:49 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:27.255 21:01:49 -- setup/devices.sh@53 -- # local found=0 00:05:27.255 21:01:49 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:27.255 21:01:49 -- setup/devices.sh@56 -- # : 00:05:27.255 21:01:49 -- setup/devices.sh@59 -- # local pci status 00:05:27.255 21:01:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.255 21:01:49 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:27.255 21:01:49 -- setup/devices.sh@47 -- # setup output config 00:05:27.255 21:01:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.255 21:01:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:27.255 21:01:49 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.255 21:01:49 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:27.255 21:01:49 -- setup/devices.sh@63 -- # found=1 00:05:27.255 21:01:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.255 21:01:49 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.255 21:01:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.520 21:01:50 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.520 21:01:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.487 21:01:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:28.487 21:01:51 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:28.487 21:01:51 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:28.487 21:01:51 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:28.487 21:01:51 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:28.487 21:01:51 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:28.487 21:01:51 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:28.487 21:01:51 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:28.487 21:01:51 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:28.487 21:01:51 -- setup/devices.sh@50 -- # local mount_point= 00:05:28.487 21:01:51 -- setup/devices.sh@51 -- # local test_file= 00:05:28.487 21:01:51 -- setup/devices.sh@53 -- # local found=0 00:05:28.487 21:01:51 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:28.487 21:01:51 -- setup/devices.sh@59 -- # local pci status 00:05:28.487 21:01:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.487 21:01:51 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:28.487 21:01:51 -- setup/devices.sh@47 -- # setup output config 00:05:28.487 21:01:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.487 21:01:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:28.745 21:01:51 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:28.745 21:01:51 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:28.745 21:01:51 -- setup/devices.sh@63 -- # found=1 00:05:28.745 21:01:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.745 21:01:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:28.745 21:01:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.745 21:01:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:28.745 21:01:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.123 21:01:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:30.123 21:01:52 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:30.123 21:01:52 -- setup/devices.sh@68 -- # return 0 00:05:30.123 21:01:52 -- setup/devices.sh@187 -- # cleanup_dm 00:05:30.123 21:01:52 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.123 21:01:52 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:30.123 21:01:52 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:30.123 21:01:52 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:30.123 21:01:52 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:30.123 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:30.123 21:01:52 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:30.123 21:01:52 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:30.123 00:05:30.123 real 0m6.060s 00:05:30.123 user 0m0.498s 00:05:30.123 sys 0m2.331s 00:05:30.123 21:01:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.123 21:01:52 -- common/autotest_common.sh@10 -- # set +x 00:05:30.123 ************************************ 00:05:30.123 END TEST dm_mount 00:05:30.123 ************************************ 00:05:30.123 21:01:52 -- setup/devices.sh@1 -- # cleanup 00:05:30.123 21:01:52 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:30.124 21:01:52 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:30.124 21:01:52 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:30.124 21:01:52 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:30.124 21:01:52 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:30.124 21:01:52 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:30.124 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:30.124 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:30.124 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:30.124 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:30.124 21:01:52 -- setup/devices.sh@12 -- # cleanup_dm 00:05:30.124 21:01:52 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.124 21:01:52 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:30.124 21:01:52 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:30.124 21:01:52 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:30.124 21:01:52 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:30.124 21:01:52 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:30.124 ************************************ 00:05:30.124 END TEST devices 00:05:30.124 ************************************ 00:05:30.124 00:05:30.124 real 0m14.081s 00:05:30.124 user 0m1.693s 00:05:30.124 sys 0m6.935s 00:05:30.124 21:01:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.124 21:01:52 -- common/autotest_common.sh@10 -- # set +x 00:05:30.124 00:05:30.124 real 0m28.629s 00:05:30.124 user 0m6.091s 00:05:30.124 sys 0m17.193s 00:05:30.124 21:01:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.124 21:01:52 -- common/autotest_common.sh@10 -- # set +x 00:05:30.124 ************************************ 00:05:30.124 END TEST setup.sh 00:05:30.124 ************************************ 00:05:30.124 21:01:52 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:30.383 Hugepages 00:05:30.383 node hugesize free / total 00:05:30.383 node0 1048576kB 0 / 0 00:05:30.383 node0 2048kB 2048 / 2048 00:05:30.383 00:05:30.383 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:30.383 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:30.383 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:30.383 21:01:53 -- spdk/autotest.sh@141 -- # uname -s 00:05:30.383 21:01:53 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:30.383 21:01:53 -- spdk/autotest.sh@143 -- # 
nvme_namespace_revert 00:05:30.383 21:01:53 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:30.949 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:30.949 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:32.852 21:01:55 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:33.418 21:01:56 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:33.418 21:01:56 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:33.418 21:01:56 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:33.418 21:01:56 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:33.418 21:01:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:33.418 21:01:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:33.418 21:01:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:33.418 21:01:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:33.419 21:01:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:33.677 21:01:56 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:33.677 21:01:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:05:33.677 21:01:56 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:33.936 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:33.936 Waiting for block devices as requested 00:05:33.936 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:33.936 21:01:56 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:33.936 21:01:56 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:33.936 21:01:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:33.936 21:01:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:33.936 21:01:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:33.936 21:01:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:33.936 21:01:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:33.936 21:01:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:33.936 21:01:56 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:33.936 21:01:56 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:33.936 21:01:56 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:33.936 21:01:56 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:33.936 21:01:56 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:33.936 21:01:56 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:33.936 21:01:56 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:33.936 21:01:56 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:33.936 21:01:56 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:33.936 21:01:56 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:33.936 21:01:56 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:34.194 21:01:56 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:34.194 21:01:56 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:34.194 21:01:56 -- common/autotest_common.sh@1542 -- # continue 00:05:34.194 21:01:56 
-- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:34.194 21:01:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:34.194 21:01:56 -- common/autotest_common.sh@10 -- # set +x 00:05:34.194 21:01:56 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:34.194 21:01:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:34.194 21:01:56 -- common/autotest_common.sh@10 -- # set +x 00:05:34.194 21:01:56 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:34.453 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:34.453 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.831 21:01:58 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:35.831 21:01:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:35.831 21:01:58 -- common/autotest_common.sh@10 -- # set +x 00:05:35.831 21:01:58 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:35.831 21:01:58 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:35.831 21:01:58 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:35.831 21:01:58 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:35.831 21:01:58 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:35.831 21:01:58 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:35.831 21:01:58 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:35.831 21:01:58 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:35.831 21:01:58 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:35.831 21:01:58 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:35.831 21:01:58 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:35.831 21:01:58 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:35.831 21:01:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:05:35.831 21:01:58 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:35.831 21:01:58 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:35.831 21:01:58 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:35.831 21:01:58 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:35.831 21:01:58 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:35.831 21:01:58 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:35.831 21:01:58 -- common/autotest_common.sh@1578 -- # return 0 00:05:35.831 21:01:58 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:05:35.831 21:01:58 -- spdk/autotest.sh@162 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:35.831 21:01:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:35.831 21:01:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:35.831 21:01:58 -- common/autotest_common.sh@10 -- # set +x 00:05:35.831 ************************************ 00:05:35.831 START TEST unittest 00:05:35.831 ************************************ 00:05:35.831 21:01:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:35.831 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:35.831 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:05:35.831 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:05:35.831 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:35.831 ++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/unit/../.. 00:05:35.831 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:35.831 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:35.831 ++ rpc_py=rpc_cmd 00:05:35.831 ++ set -e 00:05:35.831 ++ shopt -s nullglob 00:05:35.831 ++ shopt -s extglob 00:05:35.831 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:35.831 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:35.831 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:35.831 +++ CONFIG_FIO_PLUGIN=y 00:05:35.831 +++ CONFIG_NVME_CUSE=y 00:05:35.831 +++ CONFIG_RAID5F=y 00:05:35.831 +++ CONFIG_LTO=n 00:05:35.832 +++ CONFIG_SMA=n 00:05:35.832 +++ CONFIG_ISAL=y 00:05:35.832 +++ CONFIG_OPENSSL_PATH= 00:05:35.832 +++ CONFIG_IDXD_KERNEL=n 00:05:35.832 +++ CONFIG_URING_PATH= 00:05:35.832 +++ CONFIG_DAOS=n 00:05:35.832 +++ CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:05:35.832 +++ CONFIG_OCF=n 00:05:35.832 +++ CONFIG_EXAMPLES=y 00:05:35.832 +++ CONFIG_RDMA_PROV=verbs 00:05:35.832 +++ CONFIG_ISCSI_INITIATOR=y 00:05:35.832 +++ CONFIG_VTUNE=n 00:05:35.832 +++ CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:05:35.832 +++ CONFIG_CET=n 00:05:35.832 +++ CONFIG_TESTS=y 00:05:35.832 +++ CONFIG_APPS=y 00:05:35.832 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:35.832 +++ CONFIG_DAOS_DIR= 00:05:35.832 +++ CONFIG_CRYPTO_MLX5=n 00:05:35.832 +++ CONFIG_XNVME=n 00:05:35.832 +++ CONFIG_UNIT_TESTS=y 00:05:35.832 +++ CONFIG_FUSE=n 00:05:35.832 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:35.832 +++ CONFIG_OCF_PATH= 00:05:35.832 +++ CONFIG_WPDK_DIR= 00:05:35.832 +++ CONFIG_VFIO_USER=n 00:05:35.832 +++ CONFIG_MAX_LCORES= 00:05:35.832 +++ CONFIG_ARCH=native 00:05:35.832 +++ CONFIG_TSAN=n 00:05:35.832 +++ CONFIG_VIRTIO=y 00:05:35.832 +++ CONFIG_IPSEC_MB=n 00:05:35.832 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:35.832 +++ CONFIG_ASAN=y 00:05:35.832 +++ CONFIG_SHARED=n 00:05:35.832 +++ CONFIG_VTUNE_DIR= 00:05:35.832 +++ CONFIG_RDMA_SET_TOS=y 00:05:35.832 +++ CONFIG_VBDEV_COMPRESS=n 00:05:35.832 +++ CONFIG_VFIO_USER_DIR= 00:05:35.832 +++ CONFIG_FUZZER_LIB= 00:05:35.832 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:35.832 +++ CONFIG_USDT=n 00:05:35.832 +++ CONFIG_URING_ZNS=n 00:05:35.832 +++ CONFIG_FC_PATH= 00:05:35.832 +++ CONFIG_COVERAGE=y 00:05:35.832 +++ CONFIG_CUSTOMOCF=n 00:05:35.832 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:35.832 +++ CONFIG_WERROR=y 00:05:35.832 +++ CONFIG_DEBUG=y 00:05:35.832 +++ CONFIG_RDMA=y 00:05:35.832 +++ CONFIG_HAVE_ARC4RANDOM=n 00:05:35.832 +++ CONFIG_FUZZER=n 00:05:35.832 +++ CONFIG_FC=n 00:05:35.832 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:05:35.832 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:35.832 +++ CONFIG_DPDK_COMPRESSDEV=n 00:05:35.832 +++ CONFIG_CROSS_PREFIX= 00:05:35.832 +++ CONFIG_PREFIX=/usr/local 00:05:35.832 +++ CONFIG_HAVE_LIBBSD=n 00:05:35.832 +++ CONFIG_UBSAN=y 00:05:35.832 +++ CONFIG_PGO_CAPTURE=n 00:05:35.832 +++ CONFIG_UBLK=n 00:05:35.832 +++ CONFIG_ISAL_CRYPTO=y 00:05:35.832 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:35.832 +++ CONFIG_CRYPTO=n 00:05:35.832 +++ CONFIG_RBD=n 00:05:35.832 +++ CONFIG_LIBDIR= 00:05:35.832 +++ CONFIG_IPSEC_MB_DIR= 00:05:35.832 +++ CONFIG_PGO_USE=n 00:05:35.832 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:35.832 +++ CONFIG_GOLANG=n 00:05:35.832 +++ CONFIG_VHOST=y 00:05:35.832 +++ CONFIG_IDXD=y 00:05:35.832 +++ CONFIG_AVAHI=n 00:05:35.832 +++ CONFIG_URING=n 00:05:35.832 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:35.832 +++++ dirname 
/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:35.832 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:05:35.832 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:05:35.832 +++ _root=/home/vagrant/spdk_repo/spdk 00:05:35.832 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:05:35.832 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:05:35.832 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:05:35.832 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:35.832 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:35.832 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:35.832 +++ VHOST_APP=("$_app_dir/vhost") 00:05:35.832 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:35.832 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:35.832 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:35.832 +++ [[ #ifndef SPDK_CONFIG_H 00:05:35.832 #define SPDK_CONFIG_H 00:05:35.832 #define SPDK_CONFIG_APPS 1 00:05:35.832 #define SPDK_CONFIG_ARCH native 00:05:35.832 #define SPDK_CONFIG_ASAN 1 00:05:35.832 #undef SPDK_CONFIG_AVAHI 00:05:35.832 #undef SPDK_CONFIG_CET 00:05:35.832 #define SPDK_CONFIG_COVERAGE 1 00:05:35.832 #define SPDK_CONFIG_CROSS_PREFIX 00:05:35.832 #undef SPDK_CONFIG_CRYPTO 00:05:35.832 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:35.832 #undef SPDK_CONFIG_CUSTOMOCF 00:05:35.832 #undef SPDK_CONFIG_DAOS 00:05:35.832 #define SPDK_CONFIG_DAOS_DIR 00:05:35.832 #define SPDK_CONFIG_DEBUG 1 00:05:35.832 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:35.832 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:05:35.832 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:05:35.832 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:05:35.832 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:35.832 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:35.832 #define SPDK_CONFIG_EXAMPLES 1 00:05:35.832 #undef SPDK_CONFIG_FC 00:05:35.832 #define SPDK_CONFIG_FC_PATH 00:05:35.832 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:35.832 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:35.832 #undef SPDK_CONFIG_FUSE 00:05:35.832 #undef SPDK_CONFIG_FUZZER 00:05:35.832 #define SPDK_CONFIG_FUZZER_LIB 00:05:35.832 #undef SPDK_CONFIG_GOLANG 00:05:35.832 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:05:35.832 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:35.832 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:35.832 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:35.832 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:35.832 #define SPDK_CONFIG_IDXD 1 00:05:35.832 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:35.832 #undef SPDK_CONFIG_IPSEC_MB 00:05:35.832 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:35.832 #define SPDK_CONFIG_ISAL 1 00:05:35.832 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:35.832 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:35.832 #define SPDK_CONFIG_LIBDIR 00:05:35.832 #undef SPDK_CONFIG_LTO 00:05:35.832 #define SPDK_CONFIG_MAX_LCORES 00:05:35.832 #define SPDK_CONFIG_NVME_CUSE 1 00:05:35.832 #undef SPDK_CONFIG_OCF 00:05:35.832 #define SPDK_CONFIG_OCF_PATH 00:05:35.832 #define SPDK_CONFIG_OPENSSL_PATH 00:05:35.832 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:35.832 #undef SPDK_CONFIG_PGO_USE 00:05:35.832 #define SPDK_CONFIG_PREFIX /usr/local 00:05:35.832 #define SPDK_CONFIG_RAID5F 1 00:05:35.832 #undef SPDK_CONFIG_RBD 00:05:35.832 #define SPDK_CONFIG_RDMA 1 00:05:35.832 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:35.832 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:35.832 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:35.832 
#define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:35.832 #undef SPDK_CONFIG_SHARED 00:05:35.832 #undef SPDK_CONFIG_SMA 00:05:35.832 #define SPDK_CONFIG_TESTS 1 00:05:35.832 #undef SPDK_CONFIG_TSAN 00:05:35.832 #undef SPDK_CONFIG_UBLK 00:05:35.832 #define SPDK_CONFIG_UBSAN 1 00:05:35.832 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:35.832 #undef SPDK_CONFIG_URING 00:05:35.832 #define SPDK_CONFIG_URING_PATH 00:05:35.832 #undef SPDK_CONFIG_URING_ZNS 00:05:35.832 #undef SPDK_CONFIG_USDT 00:05:35.832 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:35.832 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:35.832 #undef SPDK_CONFIG_VFIO_USER 00:05:35.832 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:35.832 #define SPDK_CONFIG_VHOST 1 00:05:35.832 #define SPDK_CONFIG_VIRTIO 1 00:05:35.832 #undef SPDK_CONFIG_VTUNE 00:05:35.832 #define SPDK_CONFIG_VTUNE_DIR 00:05:35.832 #define SPDK_CONFIG_WERROR 1 00:05:35.832 #define SPDK_CONFIG_WPDK_DIR 00:05:35.832 #undef SPDK_CONFIG_XNVME 00:05:35.832 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:35.832 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:35.832 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:35.832 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:35.832 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.832 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.832 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:35.832 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:35.833 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:35.833 ++++ export PATH 00:05:35.833 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:35.833 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:35.833 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:35.833 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:35.833 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:35.833 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:35.833 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:05:35.833 +++ TEST_TAG=N/A 00:05:35.833 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:35.833 ++ : 1 00:05:35.833 ++ export RUN_NIGHTLY 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_RUN_VALGRIND 00:05:35.833 ++ : 1 00:05:35.833 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:35.833 ++ : 1 00:05:35.833 ++ export 
SPDK_TEST_UNITTEST 00:05:35.833 ++ : 00:05:35.833 ++ export SPDK_TEST_AUTOBUILD 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_RELEASE_BUILD 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_ISAL 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_ISCSI 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:35.833 ++ : 1 00:05:35.833 ++ export SPDK_TEST_NVME 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_NVME_PMR 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_NVME_BP 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_NVME_CLI 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_NVME_CUSE 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_NVME_FDP 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_NVMF 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_VFIOUSER 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_FUZZER 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_FUZZER_SHORT 00:05:35.833 ++ : rdma 00:05:35.833 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_RBD 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_VHOST 00:05:35.833 ++ : 1 00:05:35.833 ++ export SPDK_TEST_BLOCKDEV 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_IOAT 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_BLOBFS 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_VHOST_INIT 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_LVOL 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:35.833 ++ : 1 00:05:35.833 ++ export SPDK_RUN_ASAN 00:05:35.833 ++ : 1 00:05:35.833 ++ export SPDK_RUN_UBSAN 00:05:35.833 ++ : /home/vagrant/spdk_repo/dpdk/build 00:05:35.833 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_RUN_NON_ROOT 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_CRYPTO 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_FTL 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_OCF 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_VMD 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_OPAL 00:05:35.833 ++ : v23.11 00:05:35.833 ++ export SPDK_TEST_NATIVE_DPDK 00:05:35.833 ++ : true 00:05:35.833 ++ export SPDK_AUTOTEST_X 00:05:35.833 ++ : 1 00:05:35.833 ++ export SPDK_TEST_RAID5 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_URING 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_USDT 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_USE_IGB_UIO 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_SCHEDULER 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_SCANBUILD 00:05:35.833 ++ : 00:05:35.833 ++ export SPDK_TEST_NVMF_NICS 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_SMA 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_DAOS 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_XNVME 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_ACCEL_DSA 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_ACCEL_IAA 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_ACCEL_IOAT 00:05:35.833 ++ : 00:05:35.833 ++ export SPDK_TEST_FUZZER_TARGET 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_TEST_NVMF_MDNS 00:05:35.833 ++ : 0 00:05:35.833 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:35.833 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:35.833 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:35.833 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:05:35.833 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:05:35.833 ++ 
export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:35.833 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:35.833 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:35.833 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:35.833 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:35.833 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:35.833 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:35.833 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:35.833 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:35.833 ++ PYTHONDONTWRITEBYTECODE=1 00:05:35.833 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:35.833 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:35.833 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:35.833 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:35.833 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:35.833 ++ rm -rf /var/tmp/asan_suppression_file 00:05:35.833 ++ cat 00:05:35.833 ++ echo leak:libfuse3.so 00:05:35.833 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:35.833 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:35.833 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:35.833 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:35.833 ++ '[' -z /var/spdk/dependencies ']' 00:05:35.833 ++ export DEPENDENCY_DIR 00:05:35.833 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:35.833 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:35.833 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:35.833 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:35.833 ++ export QEMU_BIN= 00:05:35.833 ++ QEMU_BIN= 00:05:35.833 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:35.833 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:35.833 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:35.833 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:35.833 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:35.834 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:35.834 ++ '[' 0 -eq 0 ']' 00:05:35.834 ++ export valgrind= 00:05:35.834 ++ valgrind= 00:05:35.834 +++ uname -s 00:05:35.834 ++ '[' Linux = Linux ']' 00:05:35.834 ++ HUGEMEM=4096 00:05:35.834 
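[Editor's note] The environment block above wires up the sanitizers for every child process: ASAN and UBSAN are told to abort on error, and a LeakSanitizer suppression file is generated on the fly (the leak:libfuse3.so rule suppresses leak reports attributed to libfuse). A condensed sketch of that setup, with the option strings copied verbatim from the trace:

    # Sanitizer knobs exported for all test processes (values from the trace).
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

    # LeakSanitizer reads suppressions from a plain text file, one rule per line.
    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo leak:libfuse3.so > "$supp"
    export LSAN_OPTIONS=suppressions=$supp
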
++ export CLEAR_HUGE=yes 00:05:35.834 ++ CLEAR_HUGE=yes 00:05:35.834 ++ [[ 0 -eq 1 ]] 00:05:35.834 ++ [[ 0 -eq 1 ]] 00:05:35.834 ++ MAKE=make 00:05:35.834 +++ nproc 00:05:35.834 ++ MAKEFLAGS=-j10 00:05:35.834 ++ export HUGEMEM=4096 00:05:35.834 ++ HUGEMEM=4096 00:05:35.834 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:05:35.834 ++ NO_HUGE=() 00:05:35.834 ++ TEST_MODE= 00:05:35.834 ++ [[ -z '' ]] 00:05:35.834 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:35.834 ++ exec 00:05:35.834 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:35.834 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:35.834 ++ set_test_storage 2147483648 00:05:35.834 ++ [[ -v testdir ]] 00:05:35.834 ++ local requested_size=2147483648 00:05:35.834 ++ local mount target_dir 00:05:35.834 ++ local -A mounts fss sizes avails uses 00:05:35.834 ++ local source fs size avail mount use 00:05:35.834 ++ local storage_fallback storage_candidates 00:05:35.834 +++ mktemp -udt spdk.XXXXXX 00:05:35.834 ++ storage_fallback=/tmp/spdk.0EIgeQ 00:05:35.834 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:35.834 ++ [[ -n '' ]] 00:05:35.834 ++ [[ -n '' ]] 00:05:35.834 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.0EIgeQ/tests/unit /tmp/spdk.0EIgeQ 00:05:35.834 ++ requested_size=2214592512 00:05:35.834 ++ read -r source fs size use avail _ mount 00:05:35.834 +++ df -T 00:05:35.834 +++ grep -v Filesystem 00:05:35.834 ++ mounts["$mount"]=udev 00:05:35.834 ++ fss["$mount"]=devtmpfs 00:05:35.834 ++ avails["$mount"]=6224465920 00:05:35.834 ++ sizes["$mount"]=6224465920 00:05:35.834 ++ uses["$mount"]=0 00:05:35.834 ++ read -r source fs size use avail _ mount 00:05:35.834 ++ mounts["$mount"]=tmpfs 00:05:35.834 ++ fss["$mount"]=tmpfs 00:05:35.834 ++ avails["$mount"]=1253408768 00:05:35.834 ++ sizes["$mount"]=1254514688 00:05:35.834 ++ uses["$mount"]=1105920 00:05:35.834 ++ read -r source fs size use avail _ mount 00:05:35.834 ++ mounts["$mount"]=/dev/vda1 00:05:35.834 ++ fss["$mount"]=ext4 00:05:35.834 ++ avails["$mount"]=8714534912 00:05:35.834 ++ sizes["$mount"]=20616794112 00:05:35.834 ++ uses["$mount"]=11885481984 00:05:35.834 ++ read -r source fs size use avail _ mount 00:05:35.834 ++ mounts["$mount"]=tmpfs 00:05:35.834 ++ fss["$mount"]=tmpfs 00:05:35.834 ++ avails["$mount"]=6272565248 00:05:35.834 ++ sizes["$mount"]=6272565248 00:05:35.834 ++ uses["$mount"]=0 00:05:35.834 ++ read -r source fs size use avail _ mount 00:05:35.834 ++ mounts["$mount"]=tmpfs 00:05:35.834 ++ fss["$mount"]=tmpfs 00:05:35.834 ++ avails["$mount"]=5242880 00:05:35.834 ++ sizes["$mount"]=5242880 00:05:35.834 ++ uses["$mount"]=0 00:05:35.834 ++ read -r source fs size use avail _ mount 00:05:35.834 ++ mounts["$mount"]=tmpfs 00:05:35.834 ++ fss["$mount"]=tmpfs 00:05:35.834 ++ avails["$mount"]=6272565248 00:05:35.834 ++ sizes["$mount"]=6272565248 00:05:35.834 ++ uses["$mount"]=0 00:05:35.834 ++ read -r source fs size use avail _ mount 00:05:35.834 ++ mounts["$mount"]=/dev/loop0 00:05:35.834 ++ fss["$mount"]=squashfs 00:05:35.834 ++ avails["$mount"]=0 00:05:35.834 ++ sizes["$mount"]=67108864 00:05:35.834 ++ uses["$mount"]=67108864 00:05:35.834 ++ read -r source fs size use avail _ mount 00:05:35.834 ++ mounts["$mount"]=/dev/vda15 00:05:35.834 ++ fss["$mount"]=vfat 00:05:35.834 ++ avails["$mount"]=103089152 00:05:35.834 ++ 
sizes["$mount"]=109422592 00:05:35.834 ++ uses["$mount"]=6334464 00:05:35.834 ++ read -r source fs size use avail _ mount 00:05:35.834 ++ mounts["$mount"]=/dev/loop1 00:05:35.834 ++ fss["$mount"]=squashfs 00:05:35.834 ++ avails["$mount"]=0 00:05:35.834 ++ sizes["$mount"]=96337920 00:05:35.834 ++ uses["$mount"]=96337920 00:05:35.834 ++ read -r source fs size use avail _ mount 00:05:35.834 ++ mounts["$mount"]=/dev/loop2 00:05:35.834 ++ fss["$mount"]=squashfs 00:05:35.834 ++ avails["$mount"]=0 00:05:35.834 ++ sizes["$mount"]=41025536 00:05:35.834 ++ uses["$mount"]=41025536 00:05:35.834 ++ read -r source fs size use avail _ mount 00:05:35.834 ++ mounts["$mount"]=tmpfs 00:05:35.834 ++ fss["$mount"]=tmpfs 00:05:35.834 ++ avails["$mount"]=1254510592 00:05:35.834 ++ sizes["$mount"]=1254510592 00:05:35.834 ++ uses["$mount"]=0 00:05:35.834 ++ read -r source fs size use avail _ mount 00:05:35.834 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:05:35.834 ++ fss["$mount"]=fuse.sshfs 00:05:35.834 ++ avails["$mount"]=94234771456 00:05:35.834 ++ sizes["$mount"]=105088212992 00:05:35.834 ++ uses["$mount"]=5468008448 00:05:35.834 ++ read -r source fs size use avail _ mount 00:05:35.834 ++ printf '* Looking for test storage...\n' 00:05:35.834 * Looking for test storage... 00:05:35.834 ++ local target_space new_size 00:05:35.834 ++ for target_dir in "${storage_candidates[@]}" 00:05:35.834 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:05:35.834 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:35.834 ++ mount=/ 00:05:35.834 ++ target_space=8714534912 00:05:35.834 ++ (( target_space == 0 || target_space < requested_size )) 00:05:35.834 ++ (( target_space >= requested_size )) 00:05:35.834 ++ [[ ext4 == tmpfs ]] 00:05:35.834 ++ [[ ext4 == ramfs ]] 00:05:35.834 ++ [[ / == / ]] 00:05:35.834 ++ new_size=14100074496 00:05:35.834 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:35.834 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:35.834 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:35.834 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:05:35.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:05:35.834 ++ return 0 00:05:35.834 ++ set -o errtrace 00:05:35.834 ++ shopt -s extdebug 00:05:35.834 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:35.834 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:35.834 21:01:58 -- common/autotest_common.sh@1672 -- # true 00:05:35.834 21:01:58 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:05:35.834 21:01:58 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:35.834 21:01:58 -- common/autotest_common.sh@29 -- # exec 00:05:35.834 21:01:58 -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:35.834 21:01:58 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:05:35.834 21:01:58 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:35.834 21:01:58 -- common/autotest_common.sh@18 -- # set -x 00:05:35.834 21:01:58 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:05:35.834 21:01:58 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:05:35.834 21:01:58 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:05:35.834 21:01:58 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:05:35.834 21:01:58 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:05:35.834 21:01:58 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:05:35.834 21:01:58 -- unit/unittest.sh@179 -- # hash lcov 00:05:35.835 21:01:58 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:35.835 21:01:58 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:35.835 21:01:58 -- unit/unittest.sh@180 -- # cov_avail=yes 00:05:35.835 21:01:58 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:05:35.835 21:01:58 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:05:35.835 21:01:58 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:35.835 21:01:58 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:35.835 21:01:58 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:05:35.835 --rc lcov_branch_coverage=1 00:05:35.835 --rc lcov_function_coverage=1 00:05:35.835 --rc genhtml_branch_coverage=1 00:05:35.835 --rc genhtml_function_coverage=1 00:05:35.835 --rc genhtml_legend=1 00:05:35.835 --rc geninfo_all_blocks=1 00:05:35.835 ' 00:05:35.835 21:01:58 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:05:35.835 --rc lcov_branch_coverage=1 00:05:35.835 --rc lcov_function_coverage=1 00:05:35.835 --rc genhtml_branch_coverage=1 00:05:35.835 --rc genhtml_function_coverage=1 00:05:35.835 --rc genhtml_legend=1 00:05:35.835 --rc geninfo_all_blocks=1 00:05:35.835 ' 00:05:35.835 21:01:58 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:05:35.835 --rc lcov_branch_coverage=1 00:05:35.835 --rc lcov_function_coverage=1 00:05:35.835 --rc genhtml_branch_coverage=1 00:05:35.835 --rc genhtml_function_coverage=1 00:05:35.835 --rc genhtml_legend=1 00:05:35.835 --rc geninfo_all_blocks=1 00:05:35.835 --no-external' 00:05:35.835 21:01:58 -- unit/unittest.sh@200 -- # LCOV='lcov 00:05:35.835 --rc lcov_branch_coverage=1 00:05:35.835 --rc lcov_function_coverage=1 00:05:35.835 --rc genhtml_branch_coverage=1 00:05:35.835 --rc genhtml_function_coverage=1 00:05:35.835 --rc genhtml_legend=1 00:05:35.835 --rc geninfo_all_blocks=1 00:05:35.835 --no-external' 00:05:35.835 21:01:58 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:05:37.739 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:37.739 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:37.739 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:37.739 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:37.739 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:37.739 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:37.739 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:37.739 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:37.739 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:37.739 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:37.739 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:37.739 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:37.739 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:37.739 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:37.739 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:37.739 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:37.739 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:37.740 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:37.740 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:37.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:37.999 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:37.999 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:37.999 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:37.999 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:38.000 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:38.000 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:38.000 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:38.000 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:38.000 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:24.710 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:06:24.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:06:24.710 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:06:24.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:06:24.710 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:06:24.710 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:06:24.710 21:02:46 -- unit/unittest.sh@206 -- # uname -m 00:06:24.710 21:02:46 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:06:24.710 21:02:46 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:24.710 21:02:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:24.710 21:02:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.710 21:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:24.710 ************************************ 00:06:24.710 START TEST unittest_pci_event 00:06:24.710 ************************************ 00:06:24.710 21:02:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:24.710 00:06:24.710 00:06:24.710 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.710 http://cunit.sourceforge.net/ 00:06:24.710 00:06:24.710 00:06:24.710 Suite: pci_event 00:06:24.710 Test: test_pci_parse_event ...[2024-06-07 21:02:46.908311] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:06:24.710 passed 00:06:24.710 00:06:24.710 [2024-06-07 21:02:46.908769] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:06:24.710 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.710 suites 1 1 n/a 0 0 00:06:24.710 tests 1 1 1 0 0 00:06:24.710 asserts 15 15 15 0 n/a 00:06:24.710 00:06:24.710 Elapsed time = 0.001 seconds 00:06:24.710 00:06:24.710 real 0m0.036s 00:06:24.710 user 0m0.022s 00:06:24.710 sys 0m0.010s 00:06:24.710 21:02:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.710 21:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:24.710 ************************************ 00:06:24.710 END TEST unittest_pci_event 00:06:24.710 ************************************ 00:06:24.710 21:02:46 -- 
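[Editor's note] Each suite from here on is launched through the harness's run_test wrapper, which prints the START TEST / END TEST banners and the real/user/sys timings seen above around a CUnit binary. A simplified sketch of what those banners imply; the real helper lives in test/common/autotest_common.sh and additionally manages xtrace and timing state, so this logic is illustrative only:

    # Minimal stand-in for the run_test wrapper whose output appears above.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"            # run the test binary, timing it like the log does
        local rc=$?          # preserve the binary's exit status
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

For example, the suite above corresponds to: run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
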
unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:24.710 21:02:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:24.710 21:02:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.710 21:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:24.710 ************************************ 00:06:24.710 START TEST unittest_include 00:06:24.710 ************************************ 00:06:24.710 21:02:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:24.710 00:06:24.710 00:06:24.710 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.710 http://cunit.sourceforge.net/ 00:06:24.710 00:06:24.710 00:06:24.710 Suite: histogram 00:06:24.710 Test: histogram_test ...passed 00:06:24.710 Test: histogram_merge ...passed 00:06:24.710 00:06:24.710 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.710 suites 1 1 n/a 0 0 00:06:24.710 tests 2 2 2 0 0 00:06:24.710 asserts 50 50 50 0 n/a 00:06:24.710 00:06:24.710 Elapsed time = 0.005 seconds 00:06:24.710 00:06:24.710 real 0m0.030s 00:06:24.710 user 0m0.024s 00:06:24.710 sys 0m0.006s 00:06:24.710 21:02:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.710 ************************************ 00:06:24.710 END TEST unittest_include 00:06:24.710 ************************************ 00:06:24.710 21:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:24.710 21:02:47 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:06:24.710 21:02:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:24.710 21:02:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.710 21:02:47 -- common/autotest_common.sh@10 -- # set +x 00:06:24.710 ************************************ 00:06:24.710 START TEST unittest_bdev 00:06:24.710 ************************************ 00:06:24.710 21:02:47 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:06:24.710 21:02:47 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:06:24.710 00:06:24.710 00:06:24.710 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.710 http://cunit.sourceforge.net/ 00:06:24.710 00:06:24.710 00:06:24.710 Suite: bdev 00:06:24.710 Test: bytes_to_blocks_test ...passed 00:06:24.710 Test: num_blocks_test ...passed 00:06:24.710 Test: io_valid_test ...passed 00:06:24.710 Test: open_write_test ...[2024-06-07 21:02:47.137917] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:06:24.710 [2024-06-07 21:02:47.138275] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:06:24.710 [2024-06-07 21:02:47.138383] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:06:24.710 passed 00:06:24.710 Test: claim_test ...passed 00:06:24.711 Test: alias_add_del_test ...[2024-06-07 21:02:47.231882] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:06:24.711 [2024-06-07 21:02:47.232041] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:06:24.711 [2024-06-07 21:02:47.232094] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper 
alias 0 already exists 00:06:24.711 passed 00:06:24.711 Test: get_device_stat_test ...passed 00:06:24.711 Test: bdev_io_types_test ...passed 00:06:24.711 Test: bdev_io_wait_test ...passed 00:06:24.711 Test: bdev_io_spans_split_test ...passed 00:06:24.993 Test: bdev_io_boundary_split_test ...passed 00:06:24.993 Test: bdev_io_max_size_and_segment_split_test ...[2024-06-07 21:02:47.407008] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:06:24.993 passed 00:06:24.993 Test: bdev_io_mix_split_test ...passed 00:06:24.993 Test: bdev_io_split_with_io_wait ...passed 00:06:24.993 Test: bdev_io_write_unit_split_test ...[2024-06-07 21:02:47.503718] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:24.993 [2024-06-07 21:02:47.503823] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:24.993 [2024-06-07 21:02:47.503852] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:06:24.993 [2024-06-07 21:02:47.503893] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:06:24.993 passed 00:06:24.993 Test: bdev_io_alignment_with_boundary ...passed 00:06:24.993 Test: bdev_io_alignment ...passed 00:06:24.993 Test: bdev_histograms ...passed 00:06:24.993 Test: bdev_write_zeroes ...passed 00:06:25.253 Test: bdev_compare_and_write ...passed 00:06:25.253 Test: bdev_compare ...passed 00:06:25.253 Test: bdev_compare_emulated ...passed 00:06:25.512 Test: bdev_zcopy_write ...passed 00:06:25.512 Test: bdev_zcopy_read ...passed 00:06:25.512 Test: bdev_open_while_hotremove ...passed 00:06:25.512 Test: bdev_close_while_hotremove ...passed 00:06:25.512 Test: bdev_open_ext_test ...[2024-06-07 21:02:48.004114] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:25.512 passed 00:06:25.512 Test: bdev_open_ext_unregister ...[2024-06-07 21:02:48.004720] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:25.512 passed 00:06:25.512 Test: bdev_set_io_timeout ...passed 00:06:25.512 Test: bdev_set_qd_sampling ...passed 00:06:25.512 Test: lba_range_overlap ...passed 00:06:25.512 Test: lock_lba_range_check_ranges ...passed 00:06:25.512 Test: lock_lba_range_with_io_outstanding ...passed 00:06:25.771 Test: lock_lba_range_overlapped ...passed 00:06:25.771 Test: bdev_quiesce ...[2024-06-07 21:02:48.205897] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:06:25.771 passed 00:06:25.771 Test: bdev_io_abort ...passed 00:06:25.771 Test: bdev_unmap ...passed 00:06:25.771 Test: bdev_write_zeroes_split_test ...passed 00:06:25.771 Test: bdev_set_options_test ...passed 00:06:25.771 Test: bdev_get_memory_domains ...[2024-06-07 21:02:48.326792] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:06:25.771 passed 00:06:25.771 Test: bdev_io_ext ...passed 00:06:25.771 Test: bdev_io_ext_no_opts ...passed 00:06:25.771 Test: bdev_io_ext_invalid_opts ...passed 00:06:26.031 Test: bdev_io_ext_split ...passed 00:06:26.031 Test: bdev_io_ext_bounce_buffer ...passed 00:06:26.031 Test: bdev_register_uuid_alias ...[2024-06-07 21:02:48.510880] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name f1bbda0d-704d-4387-98b6-295ef0fd996b already exists 00:06:26.031 [2024-06-07 21:02:48.510962] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:f1bbda0d-704d-4387-98b6-295ef0fd996b alias for bdev bdev0 00:06:26.031 passed 00:06:26.031 Test: bdev_unregister_by_name ...[2024-06-07 21:02:48.526563] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:06:26.031 [2024-06-07 21:02:48.526632] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:06:26.031 passed 00:06:26.031 Test: for_each_bdev_test ...passed 00:06:26.031 Test: bdev_seek_test ...passed 00:06:26.031 Test: bdev_copy ...passed 00:06:26.032 Test: bdev_copy_split_test ...passed 00:06:26.032 Test: examine_locks ...passed 00:06:26.032 Test: claim_v2_rwo ...[2024-06-07 21:02:48.619534] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:26.032 [2024-06-07 21:02:48.619628] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:26.032 [2024-06-07 21:02:48.619656] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:26.032 [2024-06-07 21:02:48.619721] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:26.032 [2024-06-07 21:02:48.619748] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:26.032 [2024-06-07 21:02:48.619807] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:06:26.032 passed 00:06:26.032 Test: claim_v2_rom ...[2024-06-07 21:02:48.620022] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:26.032 [2024-06-07 21:02:48.620093] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:26.032 [2024-06-07 21:02:48.620127] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:06:26.032 [2024-06-07 21:02:48.620166] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:26.032 [2024-06-07 21:02:48.620224] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:06:26.032 [2024-06-07 21:02:48.620267] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:26.032 passed 00:06:26.032 Test: claim_v2_rwm ...[2024-06-07 21:02:48.620438] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:26.032 [2024-06-07 21:02:48.620515] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:26.032 [2024-06-07 21:02:48.620565] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:26.032 [2024-06-07 21:02:48.620604] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:26.032 [2024-06-07 21:02:48.620625] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:26.032 passed 00:06:26.032 Test: claim_v2_existing_writer ...[2024-06-07 21:02:48.620651] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:06:26.032 [2024-06-07 21:02:48.620702] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:26.032 [2024-06-07 21:02:48.620869] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:26.032 passed 00:06:26.032 Test: claim_v2_existing_v1 ...[2024-06-07 21:02:48.620928] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:26.032 [2024-06-07 21:02:48.621073] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:26.032 [2024-06-07 21:02:48.621110] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:26.032 [2024-06-07 21:02:48.621129] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:26.032 passed 00:06:26.032 Test: claim_v1_existing_v2 ...[2024-06-07 21:02:48.621289] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:26.032 [2024-06-07 21:02:48.621351] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:26.032 [2024-06-07 
21:02:48.621394] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:26.032 passed 00:06:26.032 Test: examine_claimed ...[2024-06-07 21:02:48.621738] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:06:26.032 passed 00:06:26.032 00:06:26.032 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.032 suites 1 1 n/a 0 0 00:06:26.032 tests 59 59 59 0 0 00:06:26.032 asserts 4599 4599 4599 0 n/a 00:06:26.032 00:06:26.032 Elapsed time = 1.540 seconds 00:06:26.032 21:02:48 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:06:26.032 00:06:26.032 00:06:26.032 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.032 http://cunit.sourceforge.net/ 00:06:26.032 00:06:26.032 00:06:26.032 Suite: nvme 00:06:26.032 Test: test_create_ctrlr ...passed 00:06:26.032 Test: test_reset_ctrlr ...[2024-06-07 21:02:48.671428] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.032 passed 00:06:26.032 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:06:26.032 Test: test_failover_ctrlr ...passed 00:06:26.032 Test: test_race_between_failover_and_add_secondary_trid ...[2024-06-07 21:02:48.674228] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.032 [2024-06-07 21:02:48.674475] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.032 [2024-06-07 21:02:48.674666] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.032 passed 00:06:26.032 Test: test_pending_reset ...[2024-06-07 21:02:48.676094] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.032 [2024-06-07 21:02:48.676411] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.032 passed 00:06:26.032 Test: test_attach_ctrlr ...[2024-06-07 21:02:48.677583] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:06:26.032 passed 00:06:26.032 Test: test_aer_cb ...passed 00:06:26.032 Test: test_submit_nvme_cmd ...passed 00:06:26.032 Test: test_add_remove_trid ...passed 00:06:26.032 Test: test_abort ...[2024-06-07 21:02:48.680910] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7221:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:06:26.032 passed 00:06:26.032 Test: test_get_io_qpair ...passed 00:06:26.032 Test: test_bdev_unregister ...passed 00:06:26.032 Test: test_compare_ns ...passed 00:06:26.032 Test: test_init_ana_log_page ...passed 00:06:26.032 Test: test_get_memory_domains ...passed 00:06:26.032 Test: test_reconnect_qpair ...[2024-06-07 21:02:48.683649] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
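The claim_v2_rwo/rom/rwm and examine_claimed cases earlier in this suite walk the version-2 claim matrix (read_many_write_one, read_many_write_none, read_many_write_many, exclusive_write). A hedged sketch of taking one such claim; the enum spelling follows recent SPDK headers and is worth verifying against the release in use, and g_my_module is an illustrative module definition:

#include "spdk/bdev_module.h"

static struct spdk_bdev_module g_my_module = {
        .name = "my_module",
};

/* A second, incompatible claim on the same bdev fails and produces the
 * "bdev bdev0 already claimed: type read_many_write_one" messages seen
 * above. NULL opts asks for default claim options. */
static int
claim_rwo_example(struct spdk_bdev_desc *desc)
{
        return spdk_bdev_module_claim_bdev_desc(desc,
                        SPDK_BDEV_CLAIM_READ_MANY_WRITE_ONE,
                        NULL, &g_my_module);
}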
00:06:26.032 passed 00:06:26.032 Test: test_create_bdev_ctrlr ...[2024-06-07 21:02:48.684259] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5273:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:06:26.032 passed 00:06:26.032 Test: test_add_multi_ns_to_bdev ...[2024-06-07 21:02:48.685639] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4486:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:06:26.032 passed 00:06:26.032 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:06:26.032 Test: test_admin_path ...passed 00:06:26.032 Test: test_reset_bdev_ctrlr ...passed 00:06:26.032 Test: test_find_io_path ...passed 00:06:26.032 Test: test_retry_io_if_ana_state_is_updating ...passed 00:06:26.032 Test: test_retry_io_for_io_path_error ...passed 00:06:26.032 Test: test_retry_io_count ...passed 00:06:26.032 Test: test_concurrent_read_ana_log_page ...passed 00:06:26.032 Test: test_retry_io_for_ana_error ...passed 00:06:26.032 Test: test_check_io_error_resiliency_params ...[2024-06-07 21:02:48.692867] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5926:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:06:26.032 [2024-06-07 21:02:48.692992] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5930:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:26.032 [2024-06-07 21:02:48.693021] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5939:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:26.032 [2024-06-07 21:02:48.693059] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5942:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:06:26.032 [2024-06-07 21:02:48.693082] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5954:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:26.033 [2024-06-07 21:02:48.693114] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5954:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:26.033 [2024-06-07 21:02:48.693136] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5934:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:06:26.033 passed 00:06:26.033 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-06-07 21:02:48.693185] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5949:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:06:26.033 [2024-06-07 21:02:48.693215] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5946:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:06:26.033 passed 00:06:26.033 Test: test_reconnect_ctrlr ...[2024-06-07 21:02:48.694109] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.033 [2024-06-07 21:02:48.694251] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
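test_check_io_error_resiliency_params above enumerates the bdev_nvme timeout constraints one error at a time. A hypothetical restatement of those rules as a single validator, derived only from the messages in this log; the real bdev_nvme_check_io_error_resiliency_params() in module/bdev/nvme/bdev_nvme.c may differ in detail:

#include <stdbool.h>
#include <stdint.h>

static bool
check_io_error_resiliency_params(int32_t ctrlr_loss_timeout_sec,
                                 uint32_t reconnect_delay_sec,
                                 uint32_t fast_io_fail_timeout_sec)
{
        if (ctrlr_loss_timeout_sec < -1) {
                return false;   /* can't be less than -1 (-1 means retry forever) */
        }
        if (ctrlr_loss_timeout_sec == 0) {
                /* both delays must be 0 if ctrlr_loss_timeout_sec is 0 */
                return reconnect_delay_sec == 0 && fast_io_fail_timeout_sec == 0;
        }
        if (reconnect_delay_sec == 0) {
                return false;   /* can't be 0 once a loss timeout is set */
        }
        if (fast_io_fail_timeout_sec != 0 &&
            reconnect_delay_sec > fast_io_fail_timeout_sec) {
                return false;   /* delay can't exceed the fast_io_fail timeout */
        }
        if (ctrlr_loss_timeout_sec > 0) {
                if ((int64_t)reconnect_delay_sec > ctrlr_loss_timeout_sec ||
                    (int64_t)fast_io_fail_timeout_sec > ctrlr_loss_timeout_sec) {
                        return false;   /* neither may exceed the loss timeout */
                }
        }
        return true;
}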
00:06:26.033 [2024-06-07 21:02:48.694511] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.033 [2024-06-07 21:02:48.694663] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.033 [2024-06-07 21:02:48.694809] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.033 passed 00:06:26.033 Test: test_retry_failover_ctrlr ...[2024-06-07 21:02:48.695139] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.033 passed 00:06:26.033 Test: test_fail_path ...[2024-06-07 21:02:48.695702] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.033 [2024-06-07 21:02:48.695837] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.033 [2024-06-07 21:02:48.695987] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.033 [2024-06-07 21:02:48.696098] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.033 [2024-06-07 21:02:48.696238] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.033 passed 00:06:26.033 Test: test_nvme_ns_cmp ...passed 00:06:26.033 Test: test_ana_transition ...passed 00:06:26.033 Test: test_set_preferred_path ...passed 00:06:26.033 Test: test_find_next_io_path ...passed 00:06:26.033 Test: test_find_io_path_min_qd ...passed 00:06:26.033 Test: test_disable_auto_failback ...[2024-06-07 21:02:48.698027] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.033 passed 00:06:26.033 Test: test_set_multipath_policy ...passed 00:06:26.033 Test: test_uuid_generation ...passed 00:06:26.033 Test: test_retry_io_to_same_path ...passed 00:06:26.033 Test: test_race_between_reset_and_disconnected ...passed 00:06:26.033 Test: test_ctrlr_op_rpc ...passed 00:06:26.033 Test: test_bdev_ctrlr_op_rpc ...passed 00:06:26.033 Test: test_disable_enable_ctrlr ...[2024-06-07 21:02:48.701754] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:26.033 [2024-06-07 21:02:48.701892] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:26.033 passed 00:06:26.033 Test: test_delete_ctrlr_done ...passed 00:06:26.033 Test: test_ns_remove_during_reset ...passed 00:06:26.033 00:06:26.033 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.033 suites 1 1 n/a 0 0 00:06:26.033 tests 48 48 48 0 0 00:06:26.033 asserts 3553 3553 3553 0 n/a 00:06:26.033 00:06:26.033 Elapsed time = 0.033 seconds 00:06:26.293 21:02:48 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:06:26.293 Test Options 00:06:26.293 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:06:26.293 00:06:26.293 00:06:26.293 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.293 http://cunit.sourceforge.net/ 00:06:26.293 00:06:26.293 00:06:26.293 Suite: raid 00:06:26.293 Test: test_create_raid ...passed 00:06:26.293 Test: test_create_raid_superblock ...passed 00:06:26.293 Test: test_delete_raid ...passed 00:06:26.293 Test: test_create_raid_invalid_args ...[2024-06-07 21:02:48.746377] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:06:26.293 [2024-06-07 21:02:48.746915] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:06:26.293 [2024-06-07 21:02:48.747423] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:06:26.293 [2024-06-07 21:02:48.747719] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:26.293 [2024-06-07 21:02:48.748634] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:26.293 passed 00:06:26.293 Test: test_delete_raid_invalid_args ...passed 00:06:26.293 Test: test_io_channel ...passed 00:06:26.293 Test: test_reset_io ...passed 00:06:26.293 Test: test_write_io ...passed 00:06:26.293 Test: test_read_io ...passed 00:06:27.232 Test: test_unmap_io ...passed 00:06:27.232 Test: test_io_failure ...[2024-06-07 21:02:49.591018] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:06:27.232 passed 00:06:27.232 Test: test_multi_raid_no_io ...passed 00:06:27.232 Test: test_multi_raid_with_io ...passed 00:06:27.232 Test: test_io_type_supported ...passed 00:06:27.232 Test: test_raid_json_dump_info ...passed 00:06:27.232 Test: test_context_size ...passed 00:06:27.232 Test: test_raid_level_conversions ...passed 00:06:27.232 Test: test_raid_process ...passed 00:06:27.232 Test: test_raid_io_split ...passed 00:06:27.232 00:06:27.232 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.232 suites 1 1 n/a 0 0 00:06:27.232 tests 19 19 19 0 0 00:06:27.232 asserts 177879 177879 177879 0 n/a 00:06:27.232 00:06:27.232 Elapsed time = 0.856 seconds 00:06:27.232 21:02:49 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:06:27.232 00:06:27.232 00:06:27.232 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.232 http://cunit.sourceforge.net/ 00:06:27.232 00:06:27.232 00:06:27.232 Suite: raid_sb 00:06:27.232 Test: test_raid_bdev_write_superblock ...passed 00:06:27.232 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:27.232 Test: 
test_raid_bdev_parse_superblock ...[2024-06-07 21:02:49.633380] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:27.232 passed 00:06:27.232 00:06:27.232 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.232 suites 1 1 n/a 0 0 00:06:27.232 tests 3 3 3 0 0 00:06:27.232 asserts 32 32 32 0 n/a 00:06:27.232 00:06:27.232 Elapsed time = 0.001 seconds 00:06:27.232 21:02:49 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:06:27.232 00:06:27.232 00:06:27.232 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.232 http://cunit.sourceforge.net/ 00:06:27.232 00:06:27.232 00:06:27.232 Suite: concat 00:06:27.232 Test: test_concat_start ...passed 00:06:27.232 Test: test_concat_rw ...passed 00:06:27.232 Test: test_concat_null_payload ...passed 00:06:27.232 00:06:27.232 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.232 suites 1 1 n/a 0 0 00:06:27.232 tests 3 3 3 0 0 00:06:27.232 asserts 8097 8097 8097 0 n/a 00:06:27.232 00:06:27.232 Elapsed time = 0.007 seconds 00:06:27.232 21:02:49 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:06:27.232 00:06:27.232 00:06:27.232 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.232 http://cunit.sourceforge.net/ 00:06:27.232 00:06:27.232 00:06:27.232 Suite: raid1 00:06:27.232 Test: test_raid1_start ...passed 00:06:27.232 Test: test_raid1_read_balancing ...passed 00:06:27.232 00:06:27.232 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.232 suites 1 1 n/a 0 0 00:06:27.232 tests 2 2 2 0 0 00:06:27.232 asserts 2856 2856 2856 0 n/a 00:06:27.232 00:06:27.232 Elapsed time = 0.004 seconds 00:06:27.232 21:02:49 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:06:27.232 00:06:27.232 00:06:27.232 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.232 http://cunit.sourceforge.net/ 00:06:27.232 00:06:27.232 00:06:27.232 Suite: zone 00:06:27.232 Test: test_zone_get_operation ...passed 00:06:27.232 Test: test_bdev_zone_get_info ...passed 00:06:27.232 Test: test_bdev_zone_management ...passed 00:06:27.232 Test: test_bdev_zone_append ...passed 00:06:27.232 Test: test_bdev_zone_append_with_md ...passed 00:06:27.232 Test: test_bdev_zone_appendv ...passed 00:06:27.232 Test: test_bdev_zone_appendv_with_md ...passed 00:06:27.232 Test: test_bdev_io_get_append_location ...passed 00:06:27.232 00:06:27.232 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.232 suites 1 1 n/a 0 0 00:06:27.232 tests 8 8 8 0 0 00:06:27.232 asserts 94 94 94 0 n/a 00:06:27.232 00:06:27.232 Elapsed time = 0.001 seconds 00:06:27.232 21:02:49 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:06:27.232 00:06:27.232 00:06:27.232 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.232 http://cunit.sourceforge.net/ 00:06:27.232 00:06:27.232 00:06:27.232 Suite: gpt_parse 00:06:27.232 Test: test_parse_mbr_and_primary ...[2024-06-07 21:02:49.777593] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:27.233 [2024-06-07 21:02:49.778028] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:27.233 [2024-06-07 21:02:49.778195] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:27.233 [2024-06-07 21:02:49.778420] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:27.233 [2024-06-07 21:02:49.778592] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:27.233 [2024-06-07 21:02:49.778796] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:27.233 passed 00:06:27.233 Test: test_parse_secondary ...[2024-06-07 21:02:49.779873] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:27.233 [2024-06-07 21:02:49.779969] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:27.233 [2024-06-07 21:02:49.780227] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:27.233 [2024-06-07 21:02:49.780292] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:27.233 passed 00:06:27.233 Test: test_check_mbr ...[2024-06-07 21:02:49.781408] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:27.233 [2024-06-07 21:02:49.781504] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:27.233 passed 00:06:27.233 Test: test_read_header ...[2024-06-07 21:02:49.781788] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:06:27.233 [2024-06-07 21:02:49.782039] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:06:27.233 [2024-06-07 21:02:49.782239] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:06:27.233 [2024-06-07 21:02:49.782402] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:06:27.233 [2024-06-07 21:02:49.782547] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:06:27.233 [2024-06-07 21:02:49.782675] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:06:27.233 passed 00:06:27.233 Test: test_read_partitions ...[2024-06-07 21:02:49.782859] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:06:27.233 [2024-06-07 21:02:49.783005] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:06:27.233 [2024-06-07 21:02:49.783163] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:06:27.233 [2024-06-07 21:02:49.783303] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:06:27.233 [2024-06-07 21:02:49.783722] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: 
GPT partition entry array crc32 did not match 00:06:27.233 passed 00:06:27.233 00:06:27.233 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.233 suites 1 1 n/a 0 0 00:06:27.233 tests 5 5 5 0 0 00:06:27.233 asserts 33 33 33 0 n/a 00:06:27.233 00:06:27.233 Elapsed time = 0.005 seconds 00:06:27.233 21:02:49 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:06:27.233 00:06:27.233 00:06:27.233 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.233 http://cunit.sourceforge.net/ 00:06:27.233 00:06:27.233 00:06:27.233 Suite: bdev_part 00:06:27.233 Test: part_test ...[2024-06-07 21:02:49.822505] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:06:27.233 passed 00:06:27.233 Test: part_free_test ...passed 00:06:27.233 Test: part_get_io_channel_test ...passed 00:06:27.233 Test: part_construct_ext ...passed 00:06:27.233 00:06:27.233 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.233 suites 1 1 n/a 0 0 00:06:27.233 tests 4 4 4 0 0 00:06:27.233 asserts 48 48 48 0 n/a 00:06:27.233 00:06:27.233 Elapsed time = 0.042 seconds 00:06:27.233 21:02:49 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:06:27.233 00:06:27.233 00:06:27.233 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.233 http://cunit.sourceforge.net/ 00:06:27.233 00:06:27.233 00:06:27.233 Suite: scsi_nvme_suite 00:06:27.233 Test: scsi_nvme_translate_test ...passed 00:06:27.233 00:06:27.233 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.233 suites 1 1 n/a 0 0 00:06:27.233 tests 1 1 1 0 0 00:06:27.233 asserts 104 104 104 0 n/a 00:06:27.233 00:06:27.233 Elapsed time = 0.000 seconds 00:06:27.492 21:02:49 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:06:27.492 00:06:27.492 00:06:27.492 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.492 http://cunit.sourceforge.net/ 00:06:27.492 00:06:27.492 00:06:27.492 Suite: lvol 00:06:27.492 Test: ut_lvs_init ...[2024-06-07 21:02:49.935316] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:06:27.492 [2024-06-07 21:02:49.935718] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:06:27.492 passed 00:06:27.492 Test: ut_lvol_init ...passed 00:06:27.492 Test: ut_lvol_snapshot ...passed 00:06:27.492 Test: ut_lvol_clone ...passed 00:06:27.492 Test: ut_lvs_destroy ...passed 00:06:27.492 Test: ut_lvs_unload ...passed 00:06:27.492 Test: ut_lvol_resize ...[2024-06-07 21:02:49.937299] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:06:27.492 passed 00:06:27.492 Test: ut_lvol_set_read_only ...passed 00:06:27.492 Test: ut_lvol_hotremove ...passed 00:06:27.492 Test: ut_vbdev_lvol_get_io_channel ...passed 00:06:27.492 Test: ut_vbdev_lvol_io_type_supported ...passed 00:06:27.492 Test: ut_lvol_read_write ...passed 00:06:27.492 Test: ut_vbdev_lvol_submit_request ...passed 00:06:27.492 Test: ut_lvol_examine_config ...passed 00:06:27.493 Test: ut_lvol_examine_disk ...[2024-06-07 21:02:49.937996] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:06:27.493 passed 00:06:27.493 Test: ut_lvol_rename ...[2024-06-07 21:02:49.938979] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:06:27.493 [2024-06-07 21:02:49.939071] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:06:27.493 passed 00:06:27.493 Test: ut_bdev_finish ...passed 00:06:27.493 Test: ut_lvs_rename ...passed 00:06:27.493 Test: ut_lvol_seek ...passed 00:06:27.493 Test: ut_esnap_dev_create ...[2024-06-07 21:02:49.939706] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:06:27.493 [2024-06-07 21:02:49.939771] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:06:27.493 [2024-06-07 21:02:49.939795] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:06:27.493 [2024-06-07 21:02:49.939847] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:06:27.493 passed 00:06:27.493 Test: ut_lvol_esnap_clone_bad_args ...[2024-06-07 21:02:49.940009] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:06:27.493 [2024-06-07 21:02:49.940050] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:06:27.493 passed 00:06:27.493 00:06:27.493 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.493 suites 1 1 n/a 0 0 00:06:27.493 tests 21 21 21 0 0 00:06:27.493 asserts 712 712 712 0 n/a 00:06:27.493 00:06:27.493 Elapsed time = 0.005 seconds 00:06:27.493 21:02:49 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:06:27.493 00:06:27.493 00:06:27.493 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.493 http://cunit.sourceforge.net/ 00:06:27.493 00:06:27.493 00:06:27.493 Suite: zone_block 00:06:27.493 Test: test_zone_block_create ...passed 00:06:27.493 Test: test_zone_block_create_invalid ...[2024-06-07 21:02:49.990468] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:06:27.493 [2024-06-07 21:02:49.990773] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-06-07 21:02:49.990913] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:06:27.493 [2024-06-07 21:02:49.990964] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-06-07 21:02:49.991080] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:06:27.493 [2024-06-07 21:02:49.991113] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-06-07 21:02:49.991179] 
/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:06:27.493 [2024-06-07 21:02:49.991224] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument passed 00:06:27.493 Test: test_get_zone_info ...[2024-06-07 21:02:49.991628] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 [2024-06-07 21:02:49.991686] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 [2024-06-07 21:02:49.991727] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 passed 00:06:27.493 Test: test_supported_io_types ...passed 00:06:27.493 Test: test_reset_zone ...[2024-06-07 21:02:49.992428] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 [2024-06-07 21:02:49.992487] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 passed 00:06:27.493 Test: test_open_zone ...[2024-06-07 21:02:49.992885] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 [2024-06-07 21:02:49.993540] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 [2024-06-07 21:02:49.993627] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 passed 00:06:27.493 Test: test_zone_write ...[2024-06-07 21:02:49.994017] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:27.493 [2024-06-07 21:02:49.994063] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 [2024-06-07 21:02:49.994111] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:27.493 [2024-06-07 21:02:49.994146] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 [2024-06-07 21:02:49.998511] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:06:27.493 [2024-06-07 21:02:49.998568] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.493 [2024-06-07 21:02:49.998639] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:06:27.493 [2024-06-07 21:02:49.998659] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 [2024-06-07 21:02:50.002864] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:27.493 [2024-06-07 21:02:50.002934] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 passed 00:06:27.493 Test: test_zone_read ...[2024-06-07 21:02:50.003344] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:06:27.493 [2024-06-07 21:02:50.003389] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 [2024-06-07 21:02:50.003433] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:06:27.493 [2024-06-07 21:02:50.003458] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 [2024-06-07 21:02:50.003815] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:06:27.493 [2024-06-07 21:02:50.003862] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 passed 00:06:27.493 Test: test_close_zone ...[2024-06-07 21:02:50.004210] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 [2024-06-07 21:02:50.004284] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 [2024-06-07 21:02:50.004495] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 [2024-06-07 21:02:50.004539] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 passed 00:06:27.493 Test: test_finish_zone ...[2024-06-07 21:02:50.005119] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 [2024-06-07 21:02:50.005172] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
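Every test_zone_write failure above trips one of four guards in the zoned vbdev's write path. A hypothetical sketch of those checks, reconstructed from the error messages alone; the struct layout and names are illustrative, not taken from vbdev_zone_block.c:

#include <stdbool.h>
#include <stdint.h>

struct zone_sketch {
        uint64_t start_lba;   /* first LBA of the zone */
        uint64_t capacity;    /* writable blocks in the zone */
        uint64_t write_ptr;   /* next LBA that may be written */
        bool     writable;    /* simplified stand-in for the zone state machine */
};

static bool
zone_write_ok(const struct zone_sketch *z, uint64_t lba, uint64_t len)
{
        if (!z->writable) {
                return false;   /* "Trying to write to zone in invalid state" */
        }
        if (lba < z->start_lba || lba >= z->start_lba + z->capacity) {
                return false;   /* "Trying to write to invalid zone" */
        }
        if (lba != z->write_ptr) {
                return false;   /* "invalid address (lba ..., wp ...)": writes land on the write pointer */
        }
        if (lba + len > z->start_lba + z->capacity) {
                return false;   /* "Write exceeds zone capacity" */
        }
        return true;
}

This lines up with the samples in the log, assuming a zone of 0x400 blocks starting at 0: lba 0x407 against wp 0x405 fails the write-pointer check, and lba 0x3f0 with len 0x20 fails the capacity check.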
00:06:27.493 passed 00:06:27.493 Test: test_append_zone ...[2024-06-07 21:02:50.005482] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:27.493 [2024-06-07 21:02:50.005522] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 [2024-06-07 21:02:50.005564] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:27.493 [2024-06-07 21:02:50.005584] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 [2024-06-07 21:02:50.014212] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:27.493 [2024-06-07 21:02:50.014270] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:27.493 passed 00:06:27.493 00:06:27.493 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.493 suites 1 1 n/a 0 0 00:06:27.493 tests 11 11 11 0 0 00:06:27.493 asserts 3437 3437 3437 0 n/a 00:06:27.493 00:06:27.493 Elapsed time = 0.025 seconds 00:06:27.494 21:02:50 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:06:27.494 00:06:27.494 00:06:27.494 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.494 http://cunit.sourceforge.net/ 00:06:27.494 00:06:27.494 00:06:27.494 Suite: bdev 00:06:27.494 Test: basic ...[2024-06-07 21:02:50.124160] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55a9f563f401): Operation not permitted (rc=-1) 00:06:27.494 [2024-06-07 21:02:50.124602] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55a9f563f3c0): Operation not permitted (rc=-1) 00:06:27.494 [2024-06-07 21:02:50.124656] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55a9f563f401): Operation not permitted (rc=-1) 00:06:27.494 passed 00:06:27.752 Test: unregister_and_close ...passed 00:06:27.753 Test: unregister_and_close_different_threads ...passed 00:06:27.753 Test: basic_qos ...passed 00:06:27.753 Test: put_channel_during_reset ...passed 00:06:28.011 Test: aborted_reset ...passed 00:06:28.011 Test: aborted_reset_no_outstanding_io ...passed 00:06:28.011 Test: io_during_reset ...passed 00:06:28.011 Test: reset_completions ...passed 00:06:28.011 Test: io_during_qos_queue ...passed 00:06:28.270 Test: io_during_qos_reset ...passed 00:06:28.270 Test: enomem ...passed 00:06:28.270 Test: enomem_multi_bdev ...passed 00:06:28.270 Test: enomem_multi_bdev_unregister ...passed 00:06:28.270 Test: enomem_multi_io_target ...passed 00:06:28.270 Test: qos_dynamic_enable ...passed 00:06:28.530 Test: bdev_histograms_mt ...passed 00:06:28.530 Test: bdev_set_io_timeout_mt ...[2024-06-07 21:02:51.039865] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:06:28.530 passed 00:06:28.530 Test: lock_lba_range_then_submit_io ...[2024-06-07 21:02:51.056703] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x55a9f563f380 already registered (old:0x6130000003c0 new:0x613000000c80) 00:06:28.530 
passed 00:06:28.530 Test: unregister_during_reset ...passed 00:06:28.530 Test: event_notify_and_close ...passed 00:06:28.797 Test: unregister_and_qos_poller ...passed 00:06:28.797 Suite: bdev_wrong_thread 00:06:28.797 Test: spdk_bdev_register_wt ...[2024-06-07 21:02:51.215278] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:06:28.797 passed 00:06:28.797 Test: spdk_bdev_examine_wt ...[2024-06-07 21:02:51.215656] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:06:28.797 passed 00:06:28.797 00:06:28.797 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.797 suites 2 2 n/a 0 0 00:06:28.797 tests 24 24 24 0 0 00:06:28.797 asserts 621 621 621 0 n/a 00:06:28.797 00:06:28.797 Elapsed time = 1.120 seconds 00:06:28.797 00:06:28.797 real 0m4.197s 00:06:28.797 user 0m1.906s 00:06:28.797 sys 0m2.261s 00:06:28.797 ************************************ 00:06:28.797 END TEST unittest_bdev 00:06:28.797 ************************************ 00:06:28.797 21:02:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.797 21:02:51 -- common/autotest_common.sh@10 -- # set +x 00:06:28.797 21:02:51 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:28.797 21:02:51 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:28.797 21:02:51 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:28.797 21:02:51 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:28.797 21:02:51 -- unit/unittest.sh@228 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:28.797 21:02:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:28.797 21:02:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.797 21:02:51 -- common/autotest_common.sh@10 -- # set +x 00:06:28.797 ************************************ 00:06:28.797 START TEST unittest_bdev_raid5f 00:06:28.797 ************************************ 00:06:28.797 21:02:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:28.797 00:06:28.797 00:06:28.797 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.797 http://cunit.sourceforge.net/ 00:06:28.797 00:06:28.797 00:06:28.797 Suite: raid5f 00:06:28.797 Test: test_raid5f_start ...passed 00:06:29.422 Test: test_raid5f_submit_read_request ...passed 00:06:29.422 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:06:32.711 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:06:47.591 Test: test_raid5f_chunk_write_error ...passed 00:06:55.707 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:06:57.611 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:07:24.162 Test: test_raid5f_submit_read_request_degraded ...passed 00:07:24.162 00:07:24.162 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.162 suites 1 1 n/a 0 0 00:07:24.162 tests 8 8 8 0 0 00:07:24.162 asserts 351864 351864 351864 0 n/a 00:07:24.162 00:07:24.162 Elapsed time = 53.086 seconds 00:07:24.162 00:07:24.162 real 0m53.186s 00:07:24.162 user 
0m50.594s 00:07:24.162 sys 0m2.568s 00:07:24.162 21:03:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.162 ************************************ 00:07:24.162 21:03:44 -- common/autotest_common.sh@10 -- # set +x 00:07:24.162 END TEST unittest_bdev_raid5f 00:07:24.162 ************************************ 00:07:24.162 21:03:44 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:07:24.162 21:03:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:24.162 21:03:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.162 21:03:44 -- common/autotest_common.sh@10 -- # set +x 00:07:24.162 ************************************ 00:07:24.162 START TEST unittest_blob_blobfs 00:07:24.162 ************************************ 00:07:24.162 21:03:44 -- common/autotest_common.sh@1104 -- # unittest_blob 00:07:24.162 21:03:44 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:07:24.162 21:03:44 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:07:24.162 00:07:24.162 00:07:24.162 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.162 http://cunit.sourceforge.net/ 00:07:24.162 00:07:24.162 00:07:24.162 Suite: blob_nocopy_noextent 00:07:24.162 Test: blob_init ...[2024-06-07 21:03:44.581603] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:24.162 passed 00:07:24.162 Test: blob_thin_provision ...passed 00:07:24.162 Test: blob_read_only ...passed 00:07:24.162 Test: bs_load ...[2024-06-07 21:03:44.684382] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:24.162 passed 00:07:24.162 Test: bs_load_custom_cluster_size ...passed 00:07:24.162 Test: bs_load_after_failed_grow ...passed 00:07:24.162 Test: bs_cluster_sz ...[2024-06-07 21:03:44.718342] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:24.162 [2024-06-07 21:03:44.718774] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
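The blob_init and bs_cluster_sz cases here probe spdk_bs_init()'s option validation: a 500-byte dev block length, zeroed options, and a cluster size below the 4096-byte page size are all rejected. A short sketch of supplying valid options, assuming the two-argument spdk_bs_opts_init() of recent SPDK releases; bs_init_done and init_blobstore are illustrative names:

#include "spdk/blob.h"

static void
bs_init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
        /* bserrno is a negative errno on failure, e.g. for a cluster_sz
         * smaller than the 4096-byte page size */
}

static void
init_blobstore(struct spdk_bs_dev *dev)
{
        struct spdk_bs_opts opts;

        spdk_bs_opts_init(&opts, sizeof(opts));
        opts.cluster_sz = 4 * 1024 * 1024;  /* must be >= the page size; 4 MiB is the library default */
        spdk_bs_init(dev, &opts, bs_init_done, NULL);
}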
00:07:24.162 [2024-06-07 21:03:44.718921] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:24.162 passed 00:07:24.162 Test: bs_resize_md ...passed 00:07:24.162 Test: bs_destroy ...passed 00:07:24.162 Test: bs_type ...passed 00:07:24.162 Test: bs_super_block ...passed 00:07:24.162 Test: bs_test_recover_cluster_count ...passed 00:07:24.162 Test: bs_grow_live ...passed 00:07:24.162 Test: bs_grow_live_no_space ...passed 00:07:24.162 Test: bs_test_grow ...passed 00:07:24.162 Test: blob_serialize_test ...passed 00:07:24.162 Test: super_block_crc ...passed 00:07:24.162 Test: blob_thin_prov_write_count_io ...passed 00:07:24.162 Test: bs_load_iter_test ...passed 00:07:24.162 Test: blob_relations ...[2024-06-07 21:03:44.894926] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:24.162 [2024-06-07 21:03:44.895100] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:24.162 [2024-06-07 21:03:44.896183] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:24.162 [2024-06-07 21:03:44.896274] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:24.162 passed 00:07:24.162 Test: blob_relations2 ...[2024-06-07 21:03:44.911488] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:24.162 [2024-06-07 21:03:44.911593] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:24.162 [2024-06-07 21:03:44.911646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:24.162 [2024-06-07 21:03:44.911677] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:24.162 [2024-06-07 21:03:44.913250] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:24.162 [2024-06-07 21:03:44.913345] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:24.162 [2024-06-07 21:03:44.913837] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:24.162 [2024-06-07 21:03:44.913933] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:24.162 passed 00:07:24.162 Test: blob_relations3 ...passed 00:07:24.162 Test: blobstore_clean_power_failure ...passed 00:07:24.162 Test: blob_delete_snapshot_power_failure ...[2024-06-07 21:03:45.091083] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:24.162 [2024-06-07 21:03:45.105424] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:24.163 [2024-06-07 21:03:45.105535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:24.163 [2024-06-07 21:03:45.105586] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:24.163 [2024-06-07 21:03:45.120005] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:24.163 [2024-06-07 21:03:45.120127] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:24.163 [2024-06-07 21:03:45.120194] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:24.163 [2024-06-07 21:03:45.120246] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:24.163 [2024-06-07 21:03:45.134085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:24.163 [2024-06-07 21:03:45.134251] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:24.163 [2024-06-07 21:03:45.148077] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:24.163 [2024-06-07 21:03:45.148224] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:24.163 [2024-06-07 21:03:45.162373] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:24.163 [2024-06-07 21:03:45.162502] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:24.163 passed 00:07:24.163 Test: blob_create_snapshot_power_failure ...[2024-06-07 21:03:45.206022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:24.163 [2024-06-07 21:03:45.233039] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:24.163 [2024-06-07 21:03:45.246739] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:24.163 passed 00:07:24.163 Test: blob_io_unit ...passed 00:07:24.163 Test: blob_io_unit_compatibility ...passed 00:07:24.163 Test: blob_ext_md_pages ...passed 00:07:24.163 Test: blob_esnap_io_4096_4096 ...passed 00:07:24.163 Test: blob_esnap_io_512_512 ...passed 00:07:24.163 Test: blob_esnap_io_4096_512 ...passed 00:07:24.163 Test: blob_esnap_io_512_4096 ...passed 00:07:24.163 Suite: blob_bs_nocopy_noextent 00:07:24.163 Test: blob_open ...passed 00:07:24.163 Test: blob_create ...[2024-06-07 21:03:45.512000] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:24.163 passed 00:07:24.163 Test: blob_create_loop ...passed 00:07:24.163 Test: blob_create_fail ...[2024-06-07 21:03:45.617941] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:24.163 passed 00:07:24.163 Test: blob_create_internal ...passed 00:07:24.163 Test: blob_create_zero_extent ...passed 00:07:24.163 Test: blob_snapshot ...passed 00:07:24.163 Test: blob_clone ...passed 00:07:24.163 Test: blob_inflate ...[2024-06-07 21:03:45.821713] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:24.163 passed 00:07:24.163 Test: blob_delete ...passed 00:07:24.163 Test: blob_resize_test ...[2024-06-07 21:03:45.893263] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:24.163 passed 00:07:24.163 Test: channel_ops ...passed 00:07:24.163 Test: blob_super ...passed 00:07:24.163 Test: blob_rw_verify_iov ...passed 00:07:24.163 Test: blob_unmap ...passed 00:07:24.163 Test: blob_iter ...passed 00:07:24.163 Test: blob_parse_md ...passed 00:07:24.163 Test: bs_load_pending_removal ...passed 00:07:24.163 Test: bs_unload ...[2024-06-07 21:03:46.191014] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:24.163 passed 00:07:24.163 Test: bs_usable_clusters ...passed 00:07:24.163 Test: blob_crc ...[2024-06-07 21:03:46.264764] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:24.163 [2024-06-07 21:03:46.265006] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:24.163 passed 00:07:24.163 Test: blob_flags ...passed 00:07:24.163 Test: bs_version ...passed 00:07:24.163 Test: blob_set_xattrs_test ...[2024-06-07 21:03:46.373577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:24.163 [2024-06-07 21:03:46.373703] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:24.163 passed 00:07:24.163 Test: blob_thin_prov_alloc ...passed 00:07:24.163 Test: blob_insert_cluster_msg_test ...passed 00:07:24.163 Test: blob_thin_prov_rw ...passed 00:07:24.163 Test: blob_thin_prov_rle ...passed 00:07:24.163 Test: blob_thin_prov_rw_iov ...passed 00:07:24.163 Test: blob_snapshot_rw ...passed 00:07:24.163 Test: blob_snapshot_rw_iov ...passed 00:07:24.422 Test: blob_inflate_rw ...passed 00:07:24.422 Test: blob_snapshot_freeze_io ...passed 00:07:24.681 Test: blob_operation_split_rw ...passed 00:07:24.681 Test: blob_operation_split_rw_iov ...passed 00:07:24.681 Test: blob_simultaneous_operations ...[2024-06-07 21:03:47.340457] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:24.681 [2024-06-07 21:03:47.340602] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:24.681 [2024-06-07 21:03:47.341905] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:24.681 [2024-06-07 21:03:47.341975] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:24.681 [2024-06-07 21:03:47.352730] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:24.681 [2024-06-07 21:03:47.352797] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:24.681 [2024-06-07 21:03:47.352952] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:07:24.681 [2024-06-07 21:03:47.352993] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:24.939 passed 00:07:24.939 Test: blob_persist_test ...passed 00:07:24.939 Test: blob_decouple_snapshot ...passed 00:07:24.939 Test: blob_seek_io_unit ...passed 00:07:24.939 Test: blob_nested_freezes ...passed 00:07:24.940 Suite: blob_blob_nocopy_noextent 00:07:24.940 Test: blob_write ...passed 00:07:25.198 Test: blob_read ...passed 00:07:25.198 Test: blob_rw_verify ...passed 00:07:25.198 Test: blob_rw_verify_iov_nomem ...passed 00:07:25.198 Test: blob_rw_iov_read_only ...passed 00:07:25.198 Test: blob_xattr ...passed 00:07:25.198 Test: blob_dirty_shutdown ...passed 00:07:25.198 Test: blob_is_degraded ...passed 00:07:25.198 Suite: blob_esnap_bs_nocopy_noextent 00:07:25.457 Test: blob_esnap_create ...passed 00:07:25.457 Test: blob_esnap_thread_add_remove ...passed 00:07:25.457 Test: blob_esnap_clone_snapshot ...passed 00:07:25.457 Test: blob_esnap_clone_inflate ...passed 00:07:25.457 Test: blob_esnap_clone_decouple ...passed 00:07:25.457 Test: blob_esnap_clone_reload ...passed 00:07:25.457 Test: blob_esnap_hotplug ...passed 00:07:25.457 Suite: blob_nocopy_extent 00:07:25.457 Test: blob_init ...[2024-06-07 21:03:48.109686] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:25.457 passed 00:07:25.718 Test: blob_thin_provision ...passed 00:07:25.718 Test: blob_read_only ...passed 00:07:25.718 Test: bs_load ...[2024-06-07 21:03:48.161782] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:25.718 passed 00:07:25.718 Test: bs_load_custom_cluster_size ...passed 00:07:25.718 Test: bs_load_after_failed_grow ...passed 00:07:25.718 Test: bs_cluster_sz ...[2024-06-07 21:03:48.190138] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:25.718 [2024-06-07 21:03:48.190418] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
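The bs_cluster_sz failures recorded around this point all come from option validation at init time: no spdk_bs_opts field may be zero, the metadata reservation has to fit in the available clusters, and the cluster size must be at least the logical page size (4095 vs. 4096 in the record below). As a minimal sketch of the caller's side, assuming the two-argument spdk_bs_opts_init() of recent SPDK releases and an already-created struct spdk_bs_dev (obtaining one, e.g. from a bdev, is out of scope here):

#include <stdio.h>
#include "spdk/blob.h"

/* Completion callback: bserrno carries the validation errors seen in this log. */
static void
init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
	if (bserrno != 0) {
		/* e.g. a cluster_sz below the page size fails here */
		fprintf(stderr, "blobstore init failed: %d\n", bserrno);
		return;
	}
	/* bs is now usable for spdk_bs_create_blob(), spdk_bs_open_blob(), ... */
}

static void
init_blobstore(struct spdk_bs_dev *dev)
{
	struct spdk_bs_opts opts;

	spdk_bs_opts_init(&opts, sizeof(opts));
	opts.cluster_sz = 4 * 1024 * 1024;	/* must be >= the 4096-byte page size */
	spdk_bs_init(dev, &opts, init_done, NULL);
}

Older SPDK releases took only the opts pointer in spdk_bs_opts_init(); the opts_size argument is what lets the library zero fields the caller's binary does not know about.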
00:07:25.718 [2024-06-07 21:03:48.190506] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:25.718 passed 00:07:25.718 Test: bs_resize_md ...passed 00:07:25.718 Test: bs_destroy ...passed 00:07:25.718 Test: bs_type ...passed 00:07:25.718 Test: bs_super_block ...passed 00:07:25.718 Test: bs_test_recover_cluster_count ...passed 00:07:25.718 Test: bs_grow_live ...passed 00:07:25.718 Test: bs_grow_live_no_space ...passed 00:07:25.718 Test: bs_test_grow ...passed 00:07:25.718 Test: blob_serialize_test ...passed 00:07:25.718 Test: super_block_crc ...passed 00:07:25.718 Test: blob_thin_prov_write_count_io ...passed 00:07:25.718 Test: bs_load_iter_test ...passed 00:07:25.718 Test: blob_relations ...[2024-06-07 21:03:48.355809] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:25.718 [2024-06-07 21:03:48.355950] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.718 [2024-06-07 21:03:48.357014] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:25.718 [2024-06-07 21:03:48.357096] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.718 passed 00:07:25.718 Test: blob_relations2 ...[2024-06-07 21:03:48.371894] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:25.718 [2024-06-07 21:03:48.372013] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.718 [2024-06-07 21:03:48.372049] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:25.718 [2024-06-07 21:03:48.372090] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.718 [2024-06-07 21:03:48.373594] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:25.718 [2024-06-07 21:03:48.373670] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.718 [2024-06-07 21:03:48.374097] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:25.718 [2024-06-07 21:03:48.374162] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.718 passed 00:07:25.718 Test: blob_relations3 ...passed 00:07:25.979 Test: blobstore_clean_power_failure ...passed 00:07:25.979 Test: blob_delete_snapshot_power_failure ...[2024-06-07 21:03:48.546291] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:25.979 [2024-06-07 21:03:48.559854] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:25.979 [2024-06-07 21:03:48.573479] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:25.979 [2024-06-07 21:03:48.573590] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:25.979 [2024-06-07 21:03:48.573629] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.979 [2024-06-07 21:03:48.587936] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:25.979 [2024-06-07 21:03:48.588042] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:25.979 [2024-06-07 21:03:48.588078] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:25.979 [2024-06-07 21:03:48.588118] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.979 [2024-06-07 21:03:48.601561] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:25.979 [2024-06-07 21:03:48.601667] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:25.979 [2024-06-07 21:03:48.601695] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:25.979 [2024-06-07 21:03:48.601741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.979 [2024-06-07 21:03:48.615535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:25.979 [2024-06-07 21:03:48.615674] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.979 [2024-06-07 21:03:48.629826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:25.979 [2024-06-07 21:03:48.629965] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.979 [2024-06-07 21:03:48.643372] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:25.979 [2024-06-07 21:03:48.643524] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:26.238 passed 00:07:26.238 Test: blob_create_snapshot_power_failure ...[2024-06-07 21:03:48.686978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:26.238 [2024-06-07 21:03:48.700157] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:26.238 [2024-06-07 21:03:48.726685] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:26.238 [2024-06-07 21:03:48.740573] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:26.238 passed 00:07:26.238 Test: blob_io_unit ...passed 00:07:26.238 Test: blob_io_unit_compatibility ...passed 00:07:26.238 Test: blob_ext_md_pages ...passed 00:07:26.238 Test: blob_esnap_io_4096_4096 ...passed 00:07:26.238 Test: blob_esnap_io_512_512 ...passed 00:07:26.497 Test: blob_esnap_io_4096_512 ...passed 00:07:26.497 Test: 
blob_esnap_io_512_4096 ...passed 00:07:26.497 Suite: blob_bs_nocopy_extent 00:07:26.497 Test: blob_open ...passed 00:07:26.497 Test: blob_create ...[2024-06-07 21:03:49.003510] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:26.497 passed 00:07:26.497 Test: blob_create_loop ...passed 00:07:26.497 Test: blob_create_fail ...[2024-06-07 21:03:49.112947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:26.497 passed 00:07:26.497 Test: blob_create_internal ...passed 00:07:26.756 Test: blob_create_zero_extent ...passed 00:07:26.756 Test: blob_snapshot ...passed 00:07:26.756 Test: blob_clone ...passed 00:07:26.756 Test: blob_inflate ...[2024-06-07 21:03:49.305846] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:26.756 passed 00:07:26.756 Test: blob_delete ...passed 00:07:26.756 Test: blob_resize_test ...[2024-06-07 21:03:49.369706] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:26.756 passed 00:07:26.756 Test: channel_ops ...passed 00:07:27.015 Test: blob_super ...passed 00:07:27.015 Test: blob_rw_verify_iov ...passed 00:07:27.015 Test: blob_unmap ...passed 00:07:27.015 Test: blob_iter ...passed 00:07:27.015 Test: blob_parse_md ...passed 00:07:27.015 Test: bs_load_pending_removal ...passed 00:07:27.015 Test: bs_unload ...[2024-06-07 21:03:49.655938] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:27.015 passed 00:07:27.273 Test: bs_usable_clusters ...passed 00:07:27.273 Test: blob_crc ...[2024-06-07 21:03:49.724356] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:27.273 [2024-06-07 21:03:49.724496] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:27.273 passed 00:07:27.274 Test: blob_flags ...passed 00:07:27.274 Test: bs_version ...passed 00:07:27.274 Test: blob_set_xattrs_test ...[2024-06-07 21:03:49.835825] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:27.274 [2024-06-07 21:03:49.835994] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:27.274 passed 00:07:27.532 Test: blob_thin_prov_alloc ...passed 00:07:27.532 Test: blob_insert_cluster_msg_test ...passed 00:07:27.532 Test: blob_thin_prov_rw ...passed 00:07:27.532 Test: blob_thin_prov_rle ...passed 00:07:27.532 Test: blob_thin_prov_rw_iov ...passed 00:07:27.532 Test: blob_snapshot_rw ...passed 00:07:27.532 Test: blob_snapshot_rw_iov ...passed 00:07:27.791 Test: blob_inflate_rw ...passed 00:07:27.792 Test: blob_snapshot_freeze_io ...passed 00:07:28.050 Test: blob_operation_split_rw ...passed 00:07:28.310 Test: blob_operation_split_rw_iov ...passed 00:07:28.310 Test: blob_simultaneous_operations ...[2024-06-07 21:03:50.784489] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:28.310 [2024-06-07 
21:03:50.784658] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:28.310 [2024-06-07 21:03:50.785895] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:28.310 [2024-06-07 21:03:50.785957] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:28.310 [2024-06-07 21:03:50.796201] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:28.310 [2024-06-07 21:03:50.796282] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:28.310 [2024-06-07 21:03:50.796393] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:28.310 [2024-06-07 21:03:50.796422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:28.310 passed 00:07:28.310 Test: blob_persist_test ...passed 00:07:28.310 Test: blob_decouple_snapshot ...passed 00:07:28.310 Test: blob_seek_io_unit ...passed 00:07:28.310 Test: blob_nested_freezes ...passed 00:07:28.310 Suite: blob_blob_nocopy_extent 00:07:28.569 Test: blob_write ...passed 00:07:28.569 Test: blob_read ...passed 00:07:28.569 Test: blob_rw_verify ...passed 00:07:28.569 Test: blob_rw_verify_iov_nomem ...passed 00:07:28.569 Test: blob_rw_iov_read_only ...passed 00:07:28.569 Test: blob_xattr ...passed 00:07:28.569 Test: blob_dirty_shutdown ...passed 00:07:28.828 Test: blob_is_degraded ...passed 00:07:28.828 Suite: blob_esnap_bs_nocopy_extent 00:07:28.828 Test: blob_esnap_create ...passed 00:07:28.828 Test: blob_esnap_thread_add_remove ...passed 00:07:28.828 Test: blob_esnap_clone_snapshot ...passed 00:07:28.828 Test: blob_esnap_clone_inflate ...passed 00:07:28.828 Test: blob_esnap_clone_decouple ...passed 00:07:28.828 Test: blob_esnap_clone_reload ...passed 00:07:29.087 Test: blob_esnap_hotplug ...passed 00:07:29.087 Suite: blob_copy_noextent 00:07:29.087 Test: blob_init ...[2024-06-07 21:03:51.505215] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:29.087 passed 00:07:29.087 Test: blob_thin_provision ...passed 00:07:29.087 Test: blob_read_only ...passed 00:07:29.087 Test: bs_load ...[2024-06-07 21:03:51.551284] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:29.087 passed 00:07:29.087 Test: bs_load_custom_cluster_size ...passed 00:07:29.087 Test: bs_load_after_failed_grow ...passed 00:07:29.087 Test: bs_cluster_sz ...[2024-06-07 21:03:51.579740] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:29.087 [2024-06-07 21:03:51.579987] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
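The blob_relations and blob_simultaneous_operations records earlier in this suite's output keep hitting bs_is_blob_deletable: a snapshot cannot be removed while it is still open, or while more than one clone still depends on it. A sketch of the close-before-delete ordering those tests exercise; the del_ctx helper is illustrative, and the snapshot is assumed to have at most one remaining clone:

#include <stdio.h>
#include <stdlib.h>
#include "spdk/blob.h"

struct del_ctx {
	struct spdk_blob_store *bs;
	spdk_blob_id id;
};

static void
delete_done(void *cb_arg, int bserrno)
{
	/* a nonzero bserrno here corresponds to the "Failed to remove blob" records above */
	if (bserrno != 0) {
		fprintf(stderr, "blob delete failed: %d\n", bserrno);
	}
	free(cb_arg);
}

static void
close_done(void *cb_arg, int bserrno)
{
	struct del_ctx *ctx = cb_arg;

	if (bserrno != 0) {
		fprintf(stderr, "blob close failed: %d\n", bserrno);
		free(ctx);
		return;
	}
	/* The snapshot is no longer open, so deletion can pass the deletable check. */
	spdk_bs_delete_blob(ctx->bs, ctx->id, delete_done, ctx);
}

static void
delete_snapshot(struct spdk_blob_store *bs, struct spdk_blob *snapshot)
{
	struct del_ctx *ctx = calloc(1, sizeof(*ctx));

	if (ctx == NULL) {
		return;
	}
	ctx->bs = bs;
	ctx->id = spdk_blob_get_id(snapshot);
	spdk_blob_close(snapshot, close_done, ctx);	/* close first, then delete */
}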
00:07:29.087 [2024-06-07 21:03:51.580069] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:29.087 passed 00:07:29.087 Test: bs_resize_md ...passed 00:07:29.087 Test: bs_destroy ...passed 00:07:29.087 Test: bs_type ...passed 00:07:29.087 Test: bs_super_block ...passed 00:07:29.087 Test: bs_test_recover_cluster_count ...passed 00:07:29.087 Test: bs_grow_live ...passed 00:07:29.087 Test: bs_grow_live_no_space ...passed 00:07:29.087 Test: bs_test_grow ...passed 00:07:29.087 Test: blob_serialize_test ...passed 00:07:29.087 Test: super_block_crc ...passed 00:07:29.087 Test: blob_thin_prov_write_count_io ...passed 00:07:29.087 Test: bs_load_iter_test ...passed 00:07:29.087 Test: blob_relations ...[2024-06-07 21:03:51.738525] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:29.087 [2024-06-07 21:03:51.738651] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.087 [2024-06-07 21:03:51.739304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:29.087 [2024-06-07 21:03:51.739396] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.087 passed 00:07:29.087 Test: blob_relations2 ...[2024-06-07 21:03:51.754851] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:29.087 [2024-06-07 21:03:51.754937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.087 [2024-06-07 21:03:51.754964] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:29.087 [2024-06-07 21:03:51.754980] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.087 [2024-06-07 21:03:51.755982] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:29.087 [2024-06-07 21:03:51.756053] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.087 [2024-06-07 21:03:51.756383] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:29.087 [2024-06-07 21:03:51.756432] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.088 passed 00:07:29.346 Test: blob_relations3 ...passed 00:07:29.346 Test: blobstore_clean_power_failure ...passed 00:07:29.346 Test: blob_delete_snapshot_power_failure ...[2024-06-07 21:03:51.927306] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:29.346 [2024-06-07 21:03:51.939793] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:29.346 [2024-06-07 21:03:51.939902] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:29.346 [2024-06-07 21:03:51.939931] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.346 [2024-06-07 21:03:51.952751] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:29.346 [2024-06-07 21:03:51.952853] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:29.346 [2024-06-07 21:03:51.952923] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:29.346 [2024-06-07 21:03:51.952948] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.346 [2024-06-07 21:03:51.966080] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:29.346 [2024-06-07 21:03:51.966205] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.346 [2024-06-07 21:03:51.979650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:29.346 [2024-06-07 21:03:51.979768] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.346 [2024-06-07 21:03:51.992569] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:29.346 [2024-06-07 21:03:51.992691] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.346 passed 00:07:29.668 Test: blob_create_snapshot_power_failure ...[2024-06-07 21:03:52.031378] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:29.668 [2024-06-07 21:03:52.055462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:29.668 [2024-06-07 21:03:52.068421] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:29.668 passed 00:07:29.668 Test: blob_io_unit ...passed 00:07:29.668 Test: blob_io_unit_compatibility ...passed 00:07:29.668 Test: blob_ext_md_pages ...passed 00:07:29.668 Test: blob_esnap_io_4096_4096 ...passed 00:07:29.668 Test: blob_esnap_io_512_512 ...passed 00:07:29.668 Test: blob_esnap_io_4096_512 ...passed 00:07:29.668 Test: blob_esnap_io_512_4096 ...passed 00:07:29.668 Suite: blob_bs_copy_noextent 00:07:29.668 Test: blob_open ...passed 00:07:29.668 Test: blob_create ...[2024-06-07 21:03:52.325158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:29.668 passed 00:07:29.927 Test: blob_create_loop ...passed 00:07:29.927 Test: blob_create_fail ...[2024-06-07 21:03:52.422050] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:29.927 passed 00:07:29.927 Test: blob_create_internal ...passed 00:07:29.927 Test: blob_create_zero_extent ...passed 00:07:29.927 Test: blob_snapshot ...passed 00:07:29.927 Test: blob_clone ...passed 00:07:29.927 Test: blob_inflate ...[2024-06-07 21:03:52.597917] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:30.186 passed 00:07:30.186 Test: blob_delete ...passed 00:07:30.186 Test: blob_resize_test ...[2024-06-07 21:03:52.666747] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:30.186 passed 00:07:30.186 Test: channel_ops ...passed 00:07:30.186 Test: blob_super ...passed 00:07:30.186 Test: blob_rw_verify_iov ...passed 00:07:30.186 Test: blob_unmap ...passed 00:07:30.186 Test: blob_iter ...passed 00:07:30.445 Test: blob_parse_md ...passed 00:07:30.445 Test: bs_load_pending_removal ...passed 00:07:30.445 Test: bs_unload ...[2024-06-07 21:03:52.949535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:30.445 passed 00:07:30.445 Test: bs_usable_clusters ...passed 00:07:30.445 Test: blob_crc ...[2024-06-07 21:03:53.019695] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:30.445 [2024-06-07 21:03:53.019832] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:30.445 passed 00:07:30.445 Test: blob_flags ...passed 00:07:30.445 Test: bs_version ...passed 00:07:30.445 Test: blob_set_xattrs_test ...[2024-06-07 21:03:53.118145] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:30.445 [2024-06-07 21:03:53.118269] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:30.704 passed 00:07:30.704 Test: blob_thin_prov_alloc ...passed 00:07:30.704 Test: blob_insert_cluster_msg_test ...passed 00:07:30.704 Test: blob_thin_prov_rw ...passed 00:07:30.975 Test: blob_thin_prov_rle ...passed 00:07:30.975 Test: blob_thin_prov_rw_iov ...passed 00:07:30.975 Test: blob_snapshot_rw ...passed 00:07:30.975 Test: blob_snapshot_rw_iov ...passed 00:07:31.234 Test: blob_inflate_rw ...passed 00:07:31.234 Test: blob_snapshot_freeze_io ...passed 00:07:31.234 Test: blob_operation_split_rw ...passed 00:07:31.493 Test: blob_operation_split_rw_iov ...passed 00:07:31.493 Test: blob_simultaneous_operations ...[2024-06-07 21:03:54.064983] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:31.493 [2024-06-07 21:03:54.065125] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:31.493 [2024-06-07 21:03:54.065625] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:31.493 [2024-06-07 21:03:54.065667] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:31.493 [2024-06-07 21:03:54.068323] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:31.493 [2024-06-07 21:03:54.068382] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:31.493 [2024-06-07 21:03:54.068465] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:07:31.493 [2024-06-07 21:03:54.068486] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:31.493 passed 00:07:31.493 Test: blob_persist_test ...passed 00:07:31.493 Test: blob_decouple_snapshot ...passed 00:07:31.752 Test: blob_seek_io_unit ...passed 00:07:31.752 Test: blob_nested_freezes ...passed 00:07:31.752 Suite: blob_blob_copy_noextent 00:07:31.752 Test: blob_write ...passed 00:07:31.752 Test: blob_read ...passed 00:07:31.752 Test: blob_rw_verify ...passed 00:07:31.752 Test: blob_rw_verify_iov_nomem ...passed 00:07:31.752 Test: blob_rw_iov_read_only ...passed 00:07:32.010 Test: blob_xattr ...passed 00:07:32.010 Test: blob_dirty_shutdown ...passed 00:07:32.010 Test: blob_is_degraded ...passed 00:07:32.010 Suite: blob_esnap_bs_copy_noextent 00:07:32.010 Test: blob_esnap_create ...passed 00:07:32.010 Test: blob_esnap_thread_add_remove ...passed 00:07:32.010 Test: blob_esnap_clone_snapshot ...passed 00:07:32.010 Test: blob_esnap_clone_inflate ...passed 00:07:32.268 Test: blob_esnap_clone_decouple ...passed 00:07:32.268 Test: blob_esnap_clone_reload ...passed 00:07:32.268 Test: blob_esnap_hotplug ...passed 00:07:32.268 Suite: blob_copy_extent 00:07:32.268 Test: blob_init ...[2024-06-07 21:03:54.759041] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:32.268 passed 00:07:32.268 Test: blob_thin_provision ...passed 00:07:32.268 Test: blob_read_only ...passed 00:07:32.268 Test: bs_load ...[2024-06-07 21:03:54.804834] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:32.268 passed 00:07:32.268 Test: bs_load_custom_cluster_size ...passed 00:07:32.268 Test: bs_load_after_failed_grow ...passed 00:07:32.268 Test: bs_cluster_sz ...[2024-06-07 21:03:54.830784] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:32.268 [2024-06-07 21:03:54.831012] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
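Each suite's bs_unload case provokes the "Blobstore still has open blobs" record seen above: spdk_bs_unload() refuses to tear the store down while any blob handle is open. A sketch of the shutdown ordering the API expects, assuming a single blob is still open:

#include <stdio.h>
#include "spdk/blob.h"

static void
unload_done(void *cb_arg, int bserrno)
{
	if (bserrno != 0) {
		fprintf(stderr, "blobstore unload failed: %d\n", bserrno);
	}
}

static void
blob_closed(void *cb_arg, int bserrno)
{
	struct spdk_blob_store *bs = cb_arg;

	/* Calling spdk_bs_unload() before this callback fires is what produces
	 * the "Blobstore still has open blobs" error in the log. */
	spdk_bs_unload(bs, unload_done, NULL);
}

static void
shutdown_blobstore(struct spdk_blob_store *bs, struct spdk_blob *open_blob)
{
	spdk_blob_close(open_blob, blob_closed, bs);
}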
00:07:32.269 [2024-06-07 21:03:54.831070] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:32.269 passed 00:07:32.269 Test: bs_resize_md ...passed 00:07:32.269 Test: bs_destroy ...passed 00:07:32.269 Test: bs_type ...passed 00:07:32.269 Test: bs_super_block ...passed 00:07:32.269 Test: bs_test_recover_cluster_count ...passed 00:07:32.269 Test: bs_grow_live ...passed 00:07:32.269 Test: bs_grow_live_no_space ...passed 00:07:32.269 Test: bs_test_grow ...passed 00:07:32.269 Test: blob_serialize_test ...passed 00:07:32.527 Test: super_block_crc ...passed 00:07:32.527 Test: blob_thin_prov_write_count_io ...passed 00:07:32.527 Test: bs_load_iter_test ...passed 00:07:32.527 Test: blob_relations ...[2024-06-07 21:03:54.993282] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:32.527 [2024-06-07 21:03:54.993395] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:32.527 [2024-06-07 21:03:54.994305] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:32.527 [2024-06-07 21:03:54.994362] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:32.527 passed 00:07:32.527 Test: blob_relations2 ...[2024-06-07 21:03:55.008367] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:32.527 [2024-06-07 21:03:55.008453] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:32.527 [2024-06-07 21:03:55.008495] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:32.527 [2024-06-07 21:03:55.008519] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:32.527 [2024-06-07 21:03:55.009904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:32.527 [2024-06-07 21:03:55.009959] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:32.527 [2024-06-07 21:03:55.010384] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:32.527 [2024-06-07 21:03:55.010442] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:32.527 passed 00:07:32.527 Test: blob_relations3 ...passed 00:07:32.527 Test: blobstore_clean_power_failure ...passed 00:07:32.527 Test: blob_delete_snapshot_power_failure ...[2024-06-07 21:03:55.181670] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:32.527 [2024-06-07 21:03:55.195527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:32.785 [2024-06-07 21:03:55.210172] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:32.785 [2024-06-07 21:03:55.210295] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:32.785 [2024-06-07 21:03:55.210326] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:32.786 [2024-06-07 21:03:55.226758] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:32.786 [2024-06-07 21:03:55.226860] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:32.786 [2024-06-07 21:03:55.226884] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:32.786 [2024-06-07 21:03:55.226907] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:32.786 [2024-06-07 21:03:55.239992] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:32.786 [2024-06-07 21:03:55.240085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:32.786 [2024-06-07 21:03:55.240108] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:32.786 [2024-06-07 21:03:55.240132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:32.786 [2024-06-07 21:03:55.253399] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:32.786 [2024-06-07 21:03:55.253514] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:32.786 [2024-06-07 21:03:55.266398] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:32.786 [2024-06-07 21:03:55.266511] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:32.786 [2024-06-07 21:03:55.279287] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:32.786 [2024-06-07 21:03:55.279386] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:32.786 passed 00:07:32.786 Test: blob_create_snapshot_power_failure ...[2024-06-07 21:03:55.318664] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:32.786 [2024-06-07 21:03:55.331009] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:32.786 [2024-06-07 21:03:55.358072] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:32.786 [2024-06-07 21:03:55.372552] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:32.786 passed 00:07:32.786 Test: blob_io_unit ...passed 00:07:32.786 Test: blob_io_unit_compatibility ...passed 00:07:32.786 Test: blob_ext_md_pages ...passed 00:07:33.044 Test: blob_esnap_io_4096_4096 ...passed 00:07:33.044 Test: blob_esnap_io_512_512 ...passed 00:07:33.044 Test: blob_esnap_io_4096_512 ...passed 00:07:33.044 Test: 
blob_esnap_io_512_4096 ...passed 00:07:33.044 Suite: blob_bs_copy_extent 00:07:33.044 Test: blob_open ...passed 00:07:33.044 Test: blob_create ...[2024-06-07 21:03:55.617439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:33.044 passed 00:07:33.044 Test: blob_create_loop ...passed 00:07:33.044 Test: blob_create_fail ...[2024-06-07 21:03:55.716096] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:33.303 passed 00:07:33.303 Test: blob_create_internal ...passed 00:07:33.303 Test: blob_create_zero_extent ...passed 00:07:33.303 Test: blob_snapshot ...passed 00:07:33.303 Test: blob_clone ...passed 00:07:33.303 Test: blob_inflate ...[2024-06-07 21:03:55.899809] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:33.303 passed 00:07:33.303 Test: blob_delete ...passed 00:07:33.303 Test: blob_resize_test ...[2024-06-07 21:03:55.962680] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:33.303 passed 00:07:33.561 Test: channel_ops ...passed 00:07:33.561 Test: blob_super ...passed 00:07:33.561 Test: blob_rw_verify_iov ...passed 00:07:33.561 Test: blob_unmap ...passed 00:07:33.561 Test: blob_iter ...passed 00:07:33.561 Test: blob_parse_md ...passed 00:07:33.561 Test: bs_load_pending_removal ...passed 00:07:33.820 Test: bs_unload ...[2024-06-07 21:03:56.239159] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:33.820 passed 00:07:33.820 Test: bs_usable_clusters ...passed 00:07:33.820 Test: blob_crc ...[2024-06-07 21:03:56.304960] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:33.820 [2024-06-07 21:03:56.305108] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:33.820 passed 00:07:33.820 Test: blob_flags ...passed 00:07:33.820 Test: bs_version ...passed 00:07:33.820 Test: blob_set_xattrs_test ...[2024-06-07 21:03:56.420338] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:33.820 [2024-06-07 21:03:56.420470] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:33.820 passed 00:07:34.079 Test: blob_thin_prov_alloc ...passed 00:07:34.079 Test: blob_insert_cluster_msg_test ...passed 00:07:34.079 Test: blob_thin_prov_rw ...passed 00:07:34.079 Test: blob_thin_prov_rle ...passed 00:07:34.079 Test: blob_thin_prov_rw_iov ...passed 00:07:34.079 Test: blob_snapshot_rw ...passed 00:07:34.337 Test: blob_snapshot_rw_iov ...passed 00:07:34.337 Test: blob_inflate_rw ...passed 00:07:34.595 Test: blob_snapshot_freeze_io ...passed 00:07:34.595 Test: blob_operation_split_rw ...passed 00:07:34.853 Test: blob_operation_split_rw_iov ...passed 00:07:34.854 Test: blob_simultaneous_operations ...[2024-06-07 21:03:57.334713] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:34.854 [2024-06-07 
21:03:57.334830] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:34.854 [2024-06-07 21:03:57.335281] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:34.854 [2024-06-07 21:03:57.335320] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:34.854 [2024-06-07 21:03:57.337781] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:34.854 [2024-06-07 21:03:57.337840] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:34.854 [2024-06-07 21:03:57.337931] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:34.854 [2024-06-07 21:03:57.337958] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:34.854 passed 00:07:34.854 Test: blob_persist_test ...passed 00:07:34.854 Test: blob_decouple_snapshot ...passed 00:07:34.854 Test: blob_seek_io_unit ...passed 00:07:34.854 Test: blob_nested_freezes ...passed 00:07:34.854 Suite: blob_blob_copy_extent 00:07:35.112 Test: blob_write ...passed 00:07:35.112 Test: blob_read ...passed 00:07:35.112 Test: blob_rw_verify ...passed 00:07:35.112 Test: blob_rw_verify_iov_nomem ...passed 00:07:35.112 Test: blob_rw_iov_read_only ...passed 00:07:35.112 Test: blob_xattr ...passed 00:07:35.112 Test: blob_dirty_shutdown ...passed 00:07:35.371 Test: blob_is_degraded ...passed 00:07:35.371 Suite: blob_esnap_bs_copy_extent 00:07:35.371 Test: blob_esnap_create ...passed 00:07:35.371 Test: blob_esnap_thread_add_remove ...passed 00:07:35.371 Test: blob_esnap_clone_snapshot ...passed 00:07:35.371 Test: blob_esnap_clone_inflate ...passed 00:07:35.371 Test: blob_esnap_clone_decouple ...passed 00:07:35.371 Test: blob_esnap_clone_reload ...passed 00:07:35.630 Test: blob_esnap_hotplug ...passed 00:07:35.630 00:07:35.630 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.630 suites 16 16 n/a 0 0 00:07:35.630 tests 348 348 348 0 0 00:07:35.630 asserts 92605 92605 92605 0 n/a 00:07:35.630 00:07:35.630 Elapsed time = 13.473 seconds 00:07:35.630 21:03:58 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:07:35.630 00:07:35.630 00:07:35.630 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.630 http://cunit.sourceforge.net/ 00:07:35.630 00:07:35.630 00:07:35.630 Suite: blob_bdev 00:07:35.630 Test: create_bs_dev ...passed 00:07:35.630 Test: create_bs_dev_ro ...[2024-06-07 21:03:58.148945] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:07:35.630 passed 00:07:35.630 Test: create_bs_dev_rw ...passed 00:07:35.630 Test: claim_bs_dev ...[2024-06-07 21:03:58.149536] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:07:35.630 passed 00:07:35.630 Test: claim_bs_dev_ro ...passed 00:07:35.630 Test: deferred_destroy_refs ...passed 00:07:35.630 Test: deferred_destroy_channels ...passed 00:07:35.630 Test: deferred_destroy_threads ...passed 00:07:35.630 00:07:35.630 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.630 suites 1 1 n/a 0 0 00:07:35.630 tests 8 8 8 0 0 00:07:35.630 
asserts 119 119 119 0 n/a 00:07:35.630 00:07:35.630 Elapsed time = 0.001 seconds 00:07:35.630 21:03:58 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:07:35.630 00:07:35.630 00:07:35.630 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.630 http://cunit.sourceforge.net/ 00:07:35.630 00:07:35.630 00:07:35.630 Suite: tree 00:07:35.630 Test: blobfs_tree_op_test ...passed 00:07:35.630 00:07:35.630 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.630 suites 1 1 n/a 0 0 00:07:35.630 tests 1 1 1 0 0 00:07:35.630 asserts 27 27 27 0 n/a 00:07:35.630 00:07:35.630 Elapsed time = 0.000 seconds 00:07:35.630 21:03:58 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:07:35.630 00:07:35.630 00:07:35.630 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.630 http://cunit.sourceforge.net/ 00:07:35.630 00:07:35.630 00:07:35.630 Suite: blobfs_async_ut 00:07:35.630 Test: fs_init ...passed 00:07:35.630 Test: fs_open ...passed 00:07:35.889 Test: fs_create ...passed 00:07:35.889 Test: fs_truncate ...passed 00:07:35.889 Test: fs_rename ...[2024-06-07 21:03:58.357854] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:07:35.889 passed 00:07:35.889 Test: fs_rw_async ...passed 00:07:35.889 Test: fs_writev_readv_async ...passed 00:07:35.889 Test: tree_find_buffer_ut ...passed 00:07:35.889 Test: channel_ops ...passed 00:07:35.889 Test: channel_ops_sync ...passed 00:07:35.889 00:07:35.889 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.889 suites 1 1 n/a 0 0 00:07:35.889 tests 10 10 10 0 0 00:07:35.889 asserts 292 292 292 0 n/a 00:07:35.889 00:07:35.889 Elapsed time = 0.235 seconds 00:07:35.889 21:03:58 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:07:35.889 00:07:35.889 00:07:35.889 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.889 http://cunit.sourceforge.net/ 00:07:35.889 00:07:35.889 00:07:35.889 Suite: blobfs_sync_ut 00:07:36.148 Test: cache_read_after_write ...[2024-06-07 21:03:58.584459] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:07:36.148 passed 00:07:36.148 Test: file_length ...passed 00:07:36.148 Test: append_write_to_extend_blob ...passed 00:07:36.148 Test: partial_buffer ...passed 00:07:36.148 Test: cache_write_null_buffer ...passed 00:07:36.148 Test: fs_create_sync ...passed 00:07:36.148 Test: fs_rename_sync ...passed 00:07:36.148 Test: cache_append_no_cache ...passed 00:07:36.148 Test: fs_delete_file_without_close ...passed 00:07:36.148 00:07:36.148 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.148 suites 1 1 n/a 0 0 00:07:36.148 tests 9 9 9 0 0 00:07:36.148 asserts 345 345 345 0 n/a 00:07:36.148 00:07:36.148 Elapsed time = 0.532 seconds 00:07:36.408 21:03:58 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:07:36.408 00:07:36.408 00:07:36.408 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.408 http://cunit.sourceforge.net/ 00:07:36.408 00:07:36.408 00:07:36.408 Suite: blobfs_bdev_ut 00:07:36.408 Test: spdk_blobfs_bdev_detect_test ...[2024-06-07 21:03:58.846548] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
00:07:36.408 passed 00:07:36.408 Test: spdk_blobfs_bdev_create_test ...passed 00:07:36.408 Test: spdk_blobfs_bdev_mount_test ...passed 00:07:36.408 00:07:36.408 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.408 suites 1 1 n/a 0 0 00:07:36.408 tests 3 3 3 0 0 00:07:36.408 asserts 9 9 9 0 n/a 00:07:36.408 00:07:36.408 Elapsed time = 0.000 seconds 00:07:36.408 [2024-06-07 21:03:58.846907] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:07:36.408 00:07:36.408 real 0m14.310s 00:07:36.408 user 0m13.886s 00:07:36.408 sys 0m0.705s 00:07:36.408 21:03:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.408 21:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:36.408 ************************************ 00:07:36.408 END TEST unittest_blob_blobfs 00:07:36.408 ************************************ 00:07:36.408 21:03:58 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:07:36.408 21:03:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:36.408 21:03:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.408 21:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:36.408 ************************************ 00:07:36.408 START TEST unittest_event 00:07:36.408 ************************************ 00:07:36.408 21:03:58 -- common/autotest_common.sh@1104 -- # unittest_event 00:07:36.408 21:03:58 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:07:36.408 00:07:36.408 00:07:36.408 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.408 http://cunit.sourceforge.net/ 00:07:36.408 00:07:36.408 00:07:36.408 Suite: app_suite 00:07:36.408 Test: test_spdk_app_parse_args ...app_ut [options] 00:07:36.408 options: 00:07:36.408 -c, --config JSON config file (default none) 00:07:36.408 --json JSON config file (default none) 00:07:36.408 --json-ignore-init-errors 00:07:36.408 don't exit on invalid config entry 00:07:36.408 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:36.408 -g, --single-file-segments 00:07:36.408 force creating just one hugetlbfs file 00:07:36.408 -h, --help show this usage 00:07:36.408 -i, --shm-id shared memory ID (optional) 00:07:36.408 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:36.408 --lcores lcore to CPU mapping list. The list is in the format: 00:07:36.408 [<,lcores[@CPUs]>...] 00:07:36.408 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:36.408 Within the group, '-' is used for range separator, 00:07:36.408 ',' is used for single number separator. 00:07:36.408 '( )' can be omitted for single element group, 00:07:36.408 '@' can be omitted if cpus and lcores have the same value 00:07:36.408 -n, --mem-channels channel number of memory channels used for DPDK 00:07:36.408 -p, --main-core main (primary) core for DPDK 00:07:36.408 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:36.408 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:36.408 app_ut: invalid option -- 'z' 00:07:36.408 --disable-cpumask-locks Disable CPU core lock files. 
00:07:36.409 --silence-noticelog disable notice level logging to stderr 00:07:36.409 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:36.409 -u, --no-pci disable PCI access 00:07:36.409 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:36.409 --max-delay maximum reactor delay (in microseconds) 00:07:36.409 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:36.409 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:36.409 -R, --huge-unlink unlink huge files after initialization 00:07:36.409 -v, --version print SPDK version 00:07:36.409 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:36.409 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:36.409 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:36.409 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:36.409 Tracepoints vary in size and can use more than one trace entry. 00:07:36.409 --rpcs-allowed comma-separated list of permitted RPCS 00:07:36.409 --env-context Opaque context for use of the env implementation 00:07:36.409 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:36.409 --no-huge run without using hugepages 00:07:36.409 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:36.409 -e, --tpoint-group [:] 00:07:36.409 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:36.409 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:36.409 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:36.409 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:36.409 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:36.409 app_ut: unrecognized option '--test-long-opt' 00:07:36.409 [2024-06-07 21:03:58.929723] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:07:36.409 [2024-06-07 21:03:58.930068] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:07:36.409 app_ut [options] 00:07:36.409 options: 00:07:36.409 -c, --config JSON config file (default none) 00:07:36.409 --json JSON config file (default none) 00:07:36.409 --json-ignore-init-errors 00:07:36.409 don't exit on invalid config entry 00:07:36.409 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:36.409 -g, --single-file-segments 00:07:36.409 force creating just one hugetlbfs file 00:07:36.409 -h, --help show this usage 00:07:36.409 -i, --shm-id shared memory ID (optional) 00:07:36.409 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:36.409 --lcores lcore to CPU mapping list. The list is in the format: 00:07:36.409 [<,lcores[@CPUs]>...] 00:07:36.409 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:36.409 Within the group, '-' is used for range separator, 00:07:36.409 ',' is used for single number separator. 
00:07:36.409 '( )' can be omitted for single element group, 00:07:36.409 '@' can be omitted if cpus and lcores have the same value 00:07:36.409 -n, --mem-channels channel number of memory channels used for DPDK 00:07:36.409 -p, --main-core main (primary) core for DPDK 00:07:36.409 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:36.409 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:36.409 --disable-cpumask-locks Disable CPU core lock files. 00:07:36.409 --silence-noticelog disable notice level logging to stderr 00:07:36.409 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:36.409 -u, --no-pci disable PCI access 00:07:36.409 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:36.409 --max-delay maximum reactor delay (in microseconds) 00:07:36.409 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:36.409 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:36.409 -R, --huge-unlink unlink huge files after initialization 00:07:36.409 -v, --version print SPDK version 00:07:36.409 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:36.409 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:36.409 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:36.409 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:36.409 Tracepoints vary in size and can use more than one trace entry. 00:07:36.409 --rpcs-allowed comma-separated list of permitted RPCs 00:07:36.409 --env-context Opaque context for use of the env implementation 00:07:36.409 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:36.409 --no-huge run without using hugepages 00:07:36.409 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:36.409 -e, --tpoint-group <group-name>[:<tpoint-mask>] 00:07:36.409 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:36.409 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:36.409 Groups and masks can be combined (e.g. thread,bdev:0x1).
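Putting the -e syntax above together (again a sketch; spdk_tgt stands in for any SPDK app, and the group names and masks are the ones the help text itself uses as examples):

# Enable every tracepoint group.
build/bin/spdk_tgt -e all
# Enable only the first tracepoint of the bdev group (mask bit 0x1).
build/bin/spdk_tgt -e bdev:0x1
# Combine a whole group with a masked group, comma-separated.
build/bin/spdk_tgt -e thread,bdev:0x1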
00:07:36.409 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:36.409 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:36.409 [2024-06-07 21:03:58.930266] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:07:36.409 passed 00:07:36.409 00:07:36.409 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.409 suites 1 1 n/a 0 0 00:07:36.409 tests 1 1 1 0 0 00:07:36.409 asserts 8 8 8 0 n/a 00:07:36.409 00:07:36.409 Elapsed time = 0.001 seconds 00:07:36.409 21:03:58 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:07:36.409 00:07:36.409 00:07:36.409 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.409 http://cunit.sourceforge.net/ 00:07:36.409 00:07:36.409 00:07:36.409 Suite: app_suite 00:07:36.409 Test: test_create_reactor ...passed 00:07:36.409 Test: test_init_reactors ...passed 00:07:36.409 Test: test_event_call ...passed 00:07:36.409 Test: test_schedule_thread ...passed 00:07:36.409 Test: test_reschedule_thread ...passed 00:07:36.409 Test: test_bind_thread ...passed 00:07:36.409 Test: test_for_each_reactor ...passed 00:07:36.410 Test: test_reactor_stats ...passed 00:07:36.410 Test: test_scheduler ...passed 00:07:36.410 Test: test_governor ...passed 00:07:36.410 00:07:36.410 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.410 suites 1 1 n/a 0 0 00:07:36.410 tests 10 10 10 0 0 00:07:36.410 asserts 344 344 344 0 n/a 00:07:36.410 00:07:36.410 Elapsed time = 0.015 seconds 00:07:36.410 00:07:36.410 real 0m0.085s 00:07:36.410 user 0m0.049s 00:07:36.410 sys 0m0.037s 00:07:36.410 21:03:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.410 21:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:36.410 ************************************ 00:07:36.410 END TEST unittest_event 00:07:36.410 ************************************ 00:07:36.410 21:03:59 -- unit/unittest.sh@233 -- # uname -s 00:07:36.410 21:03:59 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:07:36.410 21:03:59 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:07:36.410 21:03:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:36.410 21:03:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.410 21:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:36.410 ************************************ 00:07:36.410 START TEST unittest_ftl 00:07:36.410 ************************************ 00:07:36.410 21:03:59 -- common/autotest_common.sh@1104 -- # unittest_ftl 00:07:36.410 21:03:59 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:07:36.410 00:07:36.410 00:07:36.410 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.410 http://cunit.sourceforge.net/ 00:07:36.410 00:07:36.410 00:07:36.410 Suite: ftl_band_suite 00:07:36.668 Test: test_band_block_offset_from_addr_base ...passed 00:07:36.668 Test: test_band_block_offset_from_addr_offset ...passed 00:07:36.668 Test: test_band_addr_from_block_offset ...passed 00:07:36.668 Test: test_band_set_addr ...passed 00:07:36.668 Test: test_invalidate_addr ...passed 00:07:36.668 Test: test_next_xfer_addr ...passed 00:07:36.668 00:07:36.669 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.669 suites 1 1 n/a 0 0 00:07:36.669 tests 6 6 6 0 0 00:07:36.669 asserts 30356 30356 30356 0 n/a 00:07:36.669 
00:07:36.669 Elapsed time = 0.149 seconds 00:07:36.669 21:03:59 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:07:36.669 00:07:36.669 00:07:36.669 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.669 http://cunit.sourceforge.net/ 00:07:36.669 00:07:36.669 00:07:36.669 Suite: ftl_bitmap 00:07:36.669 Test: test_ftl_bitmap_create ...[2024-06-07 21:03:59.285284] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:07:36.669 passed 00:07:36.669 Test: test_ftl_bitmap_get ...[2024-06-07 21:03:59.285900] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:07:36.669 passed 00:07:36.669 Test: test_ftl_bitmap_set ...passed 00:07:36.669 Test: test_ftl_bitmap_clear ...passed 00:07:36.669 Test: test_ftl_bitmap_find_first_set ...passed 00:07:36.669 Test: test_ftl_bitmap_find_first_clear ...passed 00:07:36.669 Test: test_ftl_bitmap_count_set ...passed 00:07:36.669 00:07:36.669 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.669 suites 1 1 n/a 0 0 00:07:36.669 tests 7 7 7 0 0 00:07:36.669 asserts 137 137 137 0 n/a 00:07:36.669 00:07:36.669 Elapsed time = 0.002 seconds 00:07:36.669 21:03:59 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:07:36.669 00:07:36.669 00:07:36.669 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.669 http://cunit.sourceforge.net/ 00:07:36.669 00:07:36.669 00:07:36.669 Suite: ftl_io_suite 00:07:36.669 Test: test_completion ...passed 00:07:36.669 Test: test_multiple_ios ...passed 00:07:36.669 00:07:36.669 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.669 suites 1 1 n/a 0 0 00:07:36.669 tests 2 2 2 0 0 00:07:36.669 asserts 47 47 47 0 n/a 00:07:36.669 00:07:36.669 Elapsed time = 0.003 seconds 00:07:36.928 21:03:59 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:07:36.928 00:07:36.928 00:07:36.928 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.928 http://cunit.sourceforge.net/ 00:07:36.928 00:07:36.928 00:07:36.928 Suite: ftl_mngt 00:07:36.928 Test: test_next_step ...passed 00:07:36.928 Test: test_continue_step ...passed 00:07:36.928 Test: test_get_func_and_step_cntx_alloc ...passed 00:07:36.928 Test: test_fail_step ...passed 00:07:36.928 Test: test_mngt_call_and_call_rollback ...passed 00:07:36.928 Test: test_nested_process_failure ...passed 00:07:36.928 00:07:36.928 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.928 suites 1 1 n/a 0 0 00:07:36.928 tests 6 6 6 0 0 00:07:36.928 asserts 176 176 176 0 n/a 00:07:36.928 00:07:36.928 Elapsed time = 0.001 seconds 00:07:36.928 21:03:59 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:07:36.928 00:07:36.928 00:07:36.928 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.928 http://cunit.sourceforge.net/ 00:07:36.928 00:07:36.928 00:07:36.928 Suite: ftl_mempool 00:07:36.928 Test: test_ftl_mempool_create ...passed 00:07:36.928 Test: test_ftl_mempool_get_put ...passed 00:07:36.928 00:07:36.928 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.928 suites 1 1 n/a 0 0 00:07:36.928 tests 2 2 2 0 0 00:07:36.928 asserts 36 36 36 0 n/a 00:07:36.928 00:07:36.928 Elapsed time = 0.000 seconds 00:07:36.928 21:03:59 -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:07:36.928 00:07:36.928 00:07:36.928 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.928 http://cunit.sourceforge.net/ 00:07:36.928 00:07:36.928 00:07:36.928 Suite: ftl_addr64_suite 00:07:36.928 Test: test_addr_cached ...passed 00:07:36.928 00:07:36.928 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.928 suites 1 1 n/a 0 0 00:07:36.928 tests 1 1 1 0 0 00:07:36.928 asserts 1536 1536 1536 0 n/a 00:07:36.928 00:07:36.928 Elapsed time = 0.001 seconds 00:07:36.928 21:03:59 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:07:36.928 00:07:36.928 00:07:36.929 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.929 http://cunit.sourceforge.net/ 00:07:36.929 00:07:36.929 00:07:36.929 Suite: ftl_sb 00:07:36.929 Test: test_sb_crc_v2 ...passed 00:07:36.929 Test: test_sb_crc_v3 ...passed 00:07:36.929 Test: test_sb_v3_md_layout ...[2024-06-07 21:03:59.439752] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:07:36.929 [2024-06-07 21:03:59.440134] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:36.929 [2024-06-07 21:03:59.440187] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:36.929 [2024-06-07 21:03:59.440220] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:36.929 [2024-06-07 21:03:59.440248] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:36.929 [2024-06-07 21:03:59.440351] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:07:36.929 [2024-06-07 21:03:59.440384] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:36.929 [2024-06-07 21:03:59.440430] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:36.929 [2024-06-07 21:03:59.440512] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:36.929 passed 00:07:36.929 Test: test_sb_v5_md_layout ...[2024-06-07 21:03:59.440559] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:36.929 [2024-06-07 21:03:59.440585] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:36.929 passed 00:07:36.929 00:07:36.929 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.929 suites 1 1 n/a 0 0 00:07:36.929 tests 4 4 4 0 0 00:07:36.929 asserts 148 148 148 0 n/a 00:07:36.929 00:07:36.929 Elapsed time = 0.002 seconds 00:07:36.929 21:03:59 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:07:36.929 00:07:36.929 00:07:36.929 CUnit - A unit testing framework 
for C - Version 2.1-3 00:07:36.929 http://cunit.sourceforge.net/ 00:07:36.929 00:07:36.929 00:07:36.929 Suite: ftl_layout_upgrade 00:07:36.929 Test: test_l2p_upgrade ...passed 00:07:36.929 00:07:36.929 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.929 suites 1 1 n/a 0 0 00:07:36.929 tests 1 1 1 0 0 00:07:36.929 asserts 140 140 140 0 n/a 00:07:36.929 00:07:36.929 Elapsed time = 0.001 seconds 00:07:36.929 00:07:36.929 real 0m0.438s 00:07:36.929 user 0m0.224s 00:07:36.929 sys 0m0.214s 00:07:36.929 ************************************ 00:07:36.929 END TEST unittest_ftl 00:07:36.929 ************************************ 00:07:36.929 21:03:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.929 21:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:36.929 21:03:59 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:36.929 21:03:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:36.929 21:03:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.929 21:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:36.929 ************************************ 00:07:36.929 START TEST unittest_accel 00:07:36.929 ************************************ 00:07:36.929 21:03:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:36.929 00:07:36.929 00:07:36.929 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.929 http://cunit.sourceforge.net/ 00:07:36.929 00:07:36.929 00:07:36.929 Suite: accel_sequence 00:07:36.929 Test: test_sequence_fill_copy ...passed 00:07:36.929 Test: test_sequence_abort ...passed 00:07:36.929 Test: test_sequence_append_error ...passed 00:07:36.929 Test: test_sequence_completion_error ...[2024-06-07 21:03:59.566577] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f82bf1227c0 00:07:36.929 [2024-06-07 21:03:59.566926] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f82bf1227c0 00:07:36.929 [2024-06-07 21:03:59.566986] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f82bf1227c0 00:07:36.929 [2024-06-07 21:03:59.567038] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f82bf1227c0 00:07:36.929 passed 00:07:36.929 Test: test_sequence_decompress ...passed 00:07:36.929 Test: test_sequence_reverse ...passed 00:07:36.929 Test: test_sequence_copy_elision ...passed 00:07:36.929 Test: test_sequence_accel_buffers ...passed 00:07:36.929 Test: test_sequence_memory_domain ...[2024-06-07 21:03:59.579126] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:07:36.929 [2024-06-07 21:03:59.579327] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:07:36.929 passed 00:07:36.929 Test: test_sequence_module_memory_domain ...passed 00:07:36.929 Test: test_sequence_crypto ...passed 00:07:36.929 Test: test_sequence_driver ...[2024-06-07 21:03:59.586313] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f82be1237c0 using driver: ut 00:07:36.929 
[2024-06-07 21:03:59.586418] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f82be1237c0 through driver: ut 00:07:36.929 passed 00:07:36.929 Test: test_sequence_same_iovs ...passed 00:07:36.929 Test: test_sequence_crc32 ...passed 00:07:36.929 Suite: accel 00:07:36.929 Test: test_spdk_accel_task_complete ...passed 00:07:36.929 Test: test_get_task ...passed 00:07:36.929 Test: test_spdk_accel_submit_copy ...passed 00:07:36.929 Test: test_spdk_accel_submit_dualcast ...[2024-06-07 21:03:59.591715] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:36.929 [2024-06-07 21:03:59.591796] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:36.929 passed 00:07:36.929 Test: test_spdk_accel_submit_compare ...passed 00:07:36.929 Test: test_spdk_accel_submit_fill ...passed 00:07:36.929 Test: test_spdk_accel_submit_crc32c ...passed 00:07:36.929 Test: test_spdk_accel_submit_crc32cv ...passed 00:07:36.929 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:07:36.929 Test: test_spdk_accel_submit_xor ...passed 00:07:36.929 Test: test_spdk_accel_module_find_by_name ...passed 00:07:36.929 Test: test_spdk_accel_module_register ...passed 00:07:36.929 00:07:36.929 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.929 suites 2 2 n/a 0 0 00:07:36.929 tests 26 26 26 0 0 00:07:36.929 asserts 831 831 831 0 n/a 00:07:36.929 00:07:36.929 Elapsed time = 0.037 seconds 00:07:37.188 00:07:37.188 real 0m0.075s 00:07:37.188 user 0m0.050s 00:07:37.188 sys 0m0.026s 00:07:37.188 21:03:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.188 21:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:37.188 ************************************ 00:07:37.188 END TEST unittest_accel 00:07:37.188 ************************************ 00:07:37.188 21:03:59 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:37.188 21:03:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:37.188 21:03:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.188 21:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:37.188 ************************************ 00:07:37.188 START TEST unittest_ioat 00:07:37.188 ************************************ 00:07:37.188 21:03:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:37.188 00:07:37.188 00:07:37.188 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.188 http://cunit.sourceforge.net/ 00:07:37.188 00:07:37.188 00:07:37.188 Suite: ioat 00:07:37.188 Test: ioat_state_check ...passed 00:07:37.188 00:07:37.188 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.188 suites 1 1 n/a 0 0 00:07:37.188 tests 1 1 1 0 0 00:07:37.188 asserts 32 32 32 0 n/a 00:07:37.188 00:07:37.188 Elapsed time = 0.000 seconds 00:07:37.188 00:07:37.188 real 0m0.027s 00:07:37.188 user 0m0.017s 00:07:37.188 sys 0m0.009s 00:07:37.188 21:03:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.188 21:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:37.188 ************************************ 00:07:37.188 END TEST unittest_ioat 00:07:37.188 ************************************ 00:07:37.188 21:03:59 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:37.188 21:03:59 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:37.188 21:03:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:37.188 21:03:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.188 21:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:37.188 ************************************ 00:07:37.188 START TEST unittest_idxd_user 00:07:37.189 ************************************ 00:07:37.189 21:03:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:37.189 00:07:37.189 00:07:37.189 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.189 http://cunit.sourceforge.net/ 00:07:37.189 00:07:37.189 00:07:37.189 Suite: idxd_user 00:07:37.189 Test: test_idxd_wait_cmd ...[2024-06-07 21:03:59.742712] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:37.189 [2024-06-07 21:03:59.743116] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:07:37.189 passed 00:07:37.189 Test: test_idxd_reset_dev ...[2024-06-07 21:03:59.743564] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:37.189 [2024-06-07 21:03:59.743703] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:07:37.189 passed 00:07:37.189 Test: test_idxd_group_config ...passed 00:07:37.189 Test: test_idxd_wq_config ...passed 00:07:37.189 00:07:37.189 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.189 suites 1 1 n/a 0 0 00:07:37.189 tests 4 4 4 0 0 00:07:37.189 asserts 20 20 20 0 n/a 00:07:37.189 00:07:37.189 Elapsed time = 0.001 seconds 00:07:37.189 00:07:37.189 real 0m0.034s 00:07:37.189 user 0m0.014s 00:07:37.189 sys 0m0.019s 00:07:37.189 21:03:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.189 ************************************ 00:07:37.189 END TEST unittest_idxd_user 00:07:37.189 ************************************ 00:07:37.189 21:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:37.189 21:03:59 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:07:37.189 21:03:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:37.189 21:03:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.189 21:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:37.189 ************************************ 00:07:37.189 START TEST unittest_iscsi 00:07:37.189 ************************************ 00:07:37.189 21:03:59 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:07:37.189 21:03:59 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:07:37.189 00:07:37.189 00:07:37.189 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.189 http://cunit.sourceforge.net/ 00:07:37.189 00:07:37.189 00:07:37.189 Suite: conn_suite 00:07:37.189 Test: read_task_split_in_order_case ...passed 00:07:37.189 Test: read_task_split_reverse_order_case ...passed 00:07:37.189 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:07:37.189 Test: process_non_read_task_completion_test ...passed 00:07:37.189 Test: free_tasks_on_connection ...passed 00:07:37.189 Test: free_tasks_with_queued_datain ...passed 00:07:37.189 Test: 
abort_queued_datain_task_test ...passed 00:07:37.189 Test: abort_queued_datain_tasks_test ...passed 00:07:37.189 00:07:37.189 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.189 suites 1 1 n/a 0 0 00:07:37.189 tests 8 8 8 0 0 00:07:37.189 asserts 230 230 230 0 n/a 00:07:37.189 00:07:37.189 Elapsed time = 0.000 seconds 00:07:37.189 21:03:59 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:07:37.189 00:07:37.189 00:07:37.189 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.189 http://cunit.sourceforge.net/ 00:07:37.189 00:07:37.189 00:07:37.189 Suite: iscsi_suite 00:07:37.448 Test: param_negotiation_test ...passed 00:07:37.448 Test: list_negotiation_test ...passed 00:07:37.448 Test: parse_valid_test ...passed 00:07:37.448 Test: parse_invalid_test ...[2024-06-07 21:03:59.866837] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:07:37.448 [2024-06-07 21:03:59.867215] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:07:37.448 [2024-06-07 21:03:59.867271] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:07:37.448 [2024-06-07 21:03:59.867353] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:07:37.448 [2024-06-07 21:03:59.867508] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:07:37.448 [2024-06-07 21:03:59.867583] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:07:37.448 [2024-06-07 21:03:59.867716] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:07:37.448 passed 00:07:37.448 00:07:37.448 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.448 suites 1 1 n/a 0 0 00:07:37.448 tests 4 4 4 0 0 00:07:37.448 asserts 161 161 161 0 n/a 00:07:37.448 00:07:37.448 Elapsed time = 0.005 seconds 00:07:37.448 21:03:59 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:07:37.448 00:07:37.448 00:07:37.448 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.448 http://cunit.sourceforge.net/ 00:07:37.448 00:07:37.448 00:07:37.448 Suite: iscsi_target_node_suite 00:07:37.448 Test: add_lun_test_cases ...[2024-06-07 21:03:59.897837] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:07:37.448 [2024-06-07 21:03:59.898191] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:07:37.448 [2024-06-07 21:03:59.898281] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:37.448 passed 00:07:37.448 Test: allow_any_allowed ...[2024-06-07 21:03:59.898326] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:37.448 [2024-06-07 21:03:59.898360] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:07:37.448 passed 00:07:37.448 Test: allow_ipv6_allowed ...passed 00:07:37.448 Test: allow_ipv6_denied ...passed 00:07:37.448 Test: allow_ipv6_invalid ...passed 00:07:37.448 Test: allow_ipv4_allowed ...passed 00:07:37.448 Test: allow_ipv4_denied ...passed 00:07:37.448 Test: allow_ipv4_invalid 
...passed 00:07:37.448 Test: node_access_allowed ...passed 00:07:37.448 Test: node_access_denied_by_empty_netmask ...passed 00:07:37.448 Test: node_access_multi_initiator_groups_cases ...passed 00:07:37.448 Test: allow_iscsi_name_multi_maps_case ...passed 00:07:37.448 Test: chap_param_test_cases ...passed 00:07:37.448 00:07:37.449 [2024-06-07 21:03:59.898778] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:07:37.449 [2024-06-07 21:03:59.898811] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:07:37.449 [2024-06-07 21:03:59.898863] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:07:37.449 [2024-06-07 21:03:59.898885] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:07:37.449 [2024-06-07 21:03:59.898911] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:07:37.449 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.449 suites 1 1 n/a 0 0 00:07:37.449 tests 13 13 13 0 0 00:07:37.449 asserts 50 50 50 0 n/a 00:07:37.449 00:07:37.449 Elapsed time = 0.001 seconds 00:07:37.449 21:03:59 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:07:37.449 00:07:37.449 00:07:37.449 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.449 http://cunit.sourceforge.net/ 00:07:37.449 00:07:37.449 00:07:37.449 Suite: iscsi_suite 00:07:37.449 Test: op_login_check_target_test ...passed 00:07:37.449 Test: op_login_session_normal_test ...[2024-06-07 21:03:59.932333] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:07:37.449 [2024-06-07 21:03:59.932717] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:37.449 [2024-06-07 21:03:59.932757] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:37.449 [2024-06-07 21:03:59.932788] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:37.449 [2024-06-07 21:03:59.932826] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:07:37.449 [2024-06-07 21:03:59.932933] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:37.449 [2024-06-07 21:03:59.933019] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:07:37.449 [2024-06-07 21:03:59.933073] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:37.449 passed 00:07:37.449 Test: maxburstlength_test ...[2024-06-07 21:03:59.933356] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:37.449 [2024-06-07 21:03:59.933413] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header 
(opcode=5) failed on NULL(NULL) 00:07:37.449 passed 00:07:37.449 Test: underflow_for_read_transfer_test ...passed 00:07:37.449 Test: underflow_for_zero_read_transfer_test ...passed 00:07:37.449 Test: underflow_for_request_sense_test ...passed 00:07:37.449 Test: underflow_for_check_condition_test ...passed 00:07:37.449 Test: add_transfer_task_test ...passed 00:07:37.449 Test: get_transfer_task_test ...passed 00:07:37.449 Test: del_transfer_task_test ...passed 00:07:37.449 Test: clear_all_transfer_tasks_test ...passed 00:07:37.449 Test: build_iovs_test ...passed 00:07:37.449 Test: build_iovs_with_md_test ...passed 00:07:37.449 Test: pdu_hdr_op_login_test ...[2024-06-07 21:03:59.934864] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:07:37.449 [2024-06-07 21:03:59.934968] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:07:37.449 passed 00:07:37.449 Test: pdu_hdr_op_text_test ...[2024-06-07 21:03:59.935048] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:07:37.449 [2024-06-07 21:03:59.935154] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:37.449 [2024-06-07 21:03:59.935248] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:07:37.449 passed 00:07:37.449 Test: pdu_hdr_op_logout_test ...[2024-06-07 21:03:59.935288] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:07:37.449 [2024-06-07 21:03:59.935382] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
00:07:37.449 passed 00:07:37.449 Test: pdu_hdr_op_scsi_test ...[2024-06-07 21:03:59.935565] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:37.449 [2024-06-07 21:03:59.935593] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:37.449 [2024-06-07 21:03:59.935635] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:07:37.449 [2024-06-07 21:03:59.935730] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:37.449 [2024-06-07 21:03:59.935813] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:07:37.449 [2024-06-07 21:03:59.936005] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:07:37.449 passed 00:07:37.449 Test: pdu_hdr_op_task_mgmt_test ...[2024-06-07 21:03:59.936143] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:07:37.449 [2024-06-07 21:03:59.936252] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:07:37.449 passed 00:07:37.449 Test: pdu_hdr_op_nopout_test ...[2024-06-07 21:03:59.936517] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:07:37.449 passed 00:07:37.449 Test: pdu_hdr_op_data_test ...[2024-06-07 21:03:59.936618] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:37.449 [2024-06-07 21:03:59.936658] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:37.449 [2024-06-07 21:03:59.936692] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:07:37.449 [2024-06-07 21:03:59.936725] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:07:37.449 [2024-06-07 21:03:59.936784] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:07:37.449 [2024-06-07 21:03:59.936861] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:37.449 [2024-06-07 21:03:59.936927] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:07:37.449 [2024-06-07 21:03:59.936984] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:07:37.449 [2024-06-07 21:03:59.937058] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:07:37.449 [2024-06-07 21:03:59.937093] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:07:37.449 passed 00:07:37.449 Test: empty_text_with_cbit_test ...passed 00:07:37.449 Test: pdu_payload_read_test ...[2024-06-07 
21:03:59.939222] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:07:37.449 passed 00:07:37.449 Test: data_out_pdu_sequence_test ...passed 00:07:37.449 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:07:37.449 00:07:37.449 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.449 suites 1 1 n/a 0 0 00:07:37.449 tests 24 24 24 0 0 00:07:37.449 asserts 150253 150253 150253 0 n/a 00:07:37.449 00:07:37.449 Elapsed time = 0.017 seconds 00:07:37.449 21:03:59 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:07:37.449 00:07:37.449 00:07:37.449 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.449 http://cunit.sourceforge.net/ 00:07:37.449 00:07:37.449 00:07:37.449 Suite: init_grp_suite 00:07:37.449 Test: create_initiator_group_success_case ...passed 00:07:37.449 Test: find_initiator_group_success_case ...passed 00:07:37.449 Test: register_initiator_group_twice_case ...passed 00:07:37.449 Test: add_initiator_name_success_case ...passed 00:07:37.449 Test: add_initiator_name_fail_case ...[2024-06-07 21:03:59.985376] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:07:37.449 passed 00:07:37.449 Test: delete_all_initiator_names_success_case ...passed 00:07:37.449 Test: add_netmask_success_case ...passed 00:07:37.449 Test: add_netmask_fail_case ...[2024-06-07 21:03:59.985831] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:07:37.449 passed 00:07:37.449 Test: delete_all_netmasks_success_case ...passed 00:07:37.449 Test: initiator_name_overwrite_all_to_any_case ...passed 00:07:37.449 Test: netmask_overwrite_all_to_any_case ...passed 00:07:37.449 Test: add_delete_initiator_names_case ...passed 00:07:37.449 Test: add_duplicated_initiator_names_case ...passed 00:07:37.449 Test: delete_nonexisting_initiator_names_case ...passed 00:07:37.449 Test: add_delete_netmasks_case ...passed 00:07:37.449 Test: add_duplicated_netmasks_case ...passed 00:07:37.449 Test: delete_nonexisting_netmasks_case ...passed 00:07:37.449 00:07:37.449 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.449 suites 1 1 n/a 0 0 00:07:37.449 tests 17 17 17 0 0 00:07:37.449 asserts 108 108 108 0 n/a 00:07:37.449 00:07:37.449 Elapsed time = 0.001 seconds 00:07:37.449 21:03:59 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:07:37.449 00:07:37.449 00:07:37.449 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.449 http://cunit.sourceforge.net/ 00:07:37.449 00:07:37.449 00:07:37.449 Suite: portal_grp_suite 00:07:37.449 Test: portal_create_ipv4_normal_case ...passed 00:07:37.449 Test: portal_create_ipv6_normal_case ...passed 00:07:37.449 Test: portal_create_ipv4_wildcard_case ...passed 00:07:37.449 Test: portal_create_ipv6_wildcard_case ...passed 00:07:37.450 Test: portal_create_twice_case ...[2024-06-07 21:04:00.018013] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:07:37.450 passed 00:07:37.450 Test: portal_grp_register_unregister_case ...passed 00:07:37.450 Test: portal_grp_register_twice_case ...passed 00:07:37.450 Test: portal_grp_add_delete_case ...passed 00:07:37.450 Test: portal_grp_add_delete_twice_case ...passed 00:07:37.450 00:07:37.450 Run Summary: 
Type Total Ran Passed Failed Inactive 00:07:37.450 suites 1 1 n/a 0 0 00:07:37.450 tests 9 9 9 0 0 00:07:37.450 asserts 44 44 44 0 n/a 00:07:37.450 00:07:37.450 Elapsed time = 0.003 seconds 00:07:37.450 00:07:37.450 real 0m0.224s 00:07:37.450 user 0m0.112s 00:07:37.450 sys 0m0.114s 00:07:37.450 21:04:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.450 ************************************ 00:07:37.450 END TEST unittest_iscsi 00:07:37.450 ************************************ 00:07:37.450 21:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:37.450 21:04:00 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:07:37.450 21:04:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:37.450 21:04:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.450 21:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:37.450 ************************************ 00:07:37.450 START TEST unittest_json 00:07:37.450 ************************************ 00:07:37.450 21:04:00 -- common/autotest_common.sh@1104 -- # unittest_json 00:07:37.450 21:04:00 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:07:37.450 00:07:37.450 00:07:37.450 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.450 http://cunit.sourceforge.net/ 00:07:37.450 00:07:37.450 00:07:37.450 Suite: json 00:07:37.450 Test: test_parse_literal ...passed 00:07:37.450 Test: test_parse_string_simple ...passed 00:07:37.450 Test: test_parse_string_control_chars ...passed 00:07:37.450 Test: test_parse_string_utf8 ...passed 00:07:37.450 Test: test_parse_string_escapes_twochar ...passed 00:07:37.450 Test: test_parse_string_escapes_unicode ...passed 00:07:37.450 Test: test_parse_number ...passed 00:07:37.450 Test: test_parse_array ...passed 00:07:37.450 Test: test_parse_object ...passed 00:07:37.450 Test: test_parse_nesting ...passed 00:07:37.450 Test: test_parse_comment ...passed 00:07:37.450 00:07:37.450 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.450 suites 1 1 n/a 0 0 00:07:37.450 tests 11 11 11 0 0 00:07:37.450 asserts 1516 1516 1516 0 n/a 00:07:37.450 00:07:37.450 Elapsed time = 0.001 seconds 00:07:37.709 21:04:00 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:07:37.709 00:07:37.709 00:07:37.709 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.709 http://cunit.sourceforge.net/ 00:07:37.709 00:07:37.709 00:07:37.709 Suite: json 00:07:37.709 Test: test_strequal ...passed 00:07:37.709 Test: test_num_to_uint16 ...passed 00:07:37.709 Test: test_num_to_int32 ...passed 00:07:37.709 Test: test_num_to_uint64 ...passed 00:07:37.709 Test: test_decode_object ...passed 00:07:37.709 Test: test_decode_array ...passed 00:07:37.709 Test: test_decode_bool ...passed 00:07:37.709 Test: test_decode_uint16 ...passed 00:07:37.709 Test: test_decode_int32 ...passed 00:07:37.709 Test: test_decode_uint32 ...passed 00:07:37.709 Test: test_decode_uint64 ...passed 00:07:37.709 Test: test_decode_string ...passed 00:07:37.709 Test: test_decode_uuid ...passed 00:07:37.709 Test: test_find ...passed 00:07:37.709 Test: test_find_array ...passed 00:07:37.709 Test: test_iterating ...passed 00:07:37.709 Test: test_free_object ...passed 00:07:37.709 00:07:37.709 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.709 suites 1 1 n/a 0 0 00:07:37.709 tests 17 17 17 0 0 00:07:37.709 asserts 236 236 236 0 n/a 00:07:37.709 00:07:37.709 Elapsed time = 0.001 seconds 00:07:37.709 
21:04:00 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:07:37.709 00:07:37.709 00:07:37.709 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.709 http://cunit.sourceforge.net/ 00:07:37.709 00:07:37.709 00:07:37.709 Suite: json 00:07:37.709 Test: test_write_literal ...passed 00:07:37.709 Test: test_write_string_simple ...passed 00:07:37.709 Test: test_write_string_escapes ...passed 00:07:37.709 Test: test_write_string_utf16le ...passed 00:07:37.709 Test: test_write_number_int32 ...passed 00:07:37.709 Test: test_write_number_uint32 ...passed 00:07:37.709 Test: test_write_number_uint128 ...passed 00:07:37.709 Test: test_write_string_number_uint128 ...passed 00:07:37.709 Test: test_write_number_int64 ...passed 00:07:37.709 Test: test_write_number_uint64 ...passed 00:07:37.709 Test: test_write_number_double ...passed 00:07:37.709 Test: test_write_uuid ...passed 00:07:37.709 Test: test_write_array ...passed 00:07:37.709 Test: test_write_object ...passed 00:07:37.709 Test: test_write_nesting ...passed 00:07:37.709 Test: test_write_val ...passed 00:07:37.709 00:07:37.709 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.709 suites 1 1 n/a 0 0 00:07:37.709 tests 16 16 16 0 0 00:07:37.709 asserts 918 918 918 0 n/a 00:07:37.709 00:07:37.709 Elapsed time = 0.007 seconds 00:07:37.709 21:04:00 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:07:37.709 00:07:37.709 00:07:37.709 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.709 http://cunit.sourceforge.net/ 00:07:37.709 00:07:37.709 00:07:37.709 Suite: jsonrpc 00:07:37.709 Test: test_parse_request ...passed 00:07:37.709 Test: test_parse_request_streaming ...passed 00:07:37.709 00:07:37.709 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.709 suites 1 1 n/a 0 0 00:07:37.709 tests 2 2 2 0 0 00:07:37.709 asserts 289 289 289 0 n/a 00:07:37.709 00:07:37.709 Elapsed time = 0.004 seconds 00:07:37.709 00:07:37.709 real 0m0.135s 00:07:37.709 user 0m0.074s 00:07:37.709 sys 0m0.062s 00:07:37.709 21:04:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.709 ************************************ 00:07:37.709 END TEST unittest_json 00:07:37.709 ************************************ 00:07:37.709 21:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:37.709 21:04:00 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:07:37.709 21:04:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:37.709 21:04:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.709 21:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:37.709 ************************************ 00:07:37.709 START TEST unittest_rpc 00:07:37.709 ************************************ 00:07:37.709 21:04:00 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:07:37.709 21:04:00 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:07:37.709 00:07:37.709 00:07:37.709 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.709 http://cunit.sourceforge.net/ 00:07:37.709 00:07:37.709 00:07:37.709 Suite: rpc 00:07:37.709 Test: test_jsonrpc_handler ...passed 00:07:37.709 Test: test_spdk_rpc_is_method_allowed ...passed 00:07:37.709 Test: test_rpc_get_methods ...passed 00:07:37.709 Test: test_rpc_spdk_get_version ...passed 00:07:37.709 Test: test_spdk_rpc_listen_close ...passed 00:07:37.709 00:07:37.709 [2024-06-07 21:04:00.290687] 
/home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:07:37.709 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.709 suites 1 1 n/a 0 0 00:07:37.709 tests 5 5 5 0 0 00:07:37.709 asserts 20 20 20 0 n/a 00:07:37.709 00:07:37.709 Elapsed time = 0.000 seconds 00:07:37.709 00:07:37.709 real 0m0.028s 00:07:37.709 user 0m0.012s 00:07:37.709 sys 0m0.017s 00:07:37.709 21:04:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.709 21:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:37.709 ************************************ 00:07:37.709 END TEST unittest_rpc 00:07:37.709 ************************************ 00:07:37.709 21:04:00 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:37.709 21:04:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:37.709 21:04:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.709 21:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:37.709 ************************************ 00:07:37.709 START TEST unittest_notify 00:07:37.709 ************************************ 00:07:37.709 21:04:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:37.709 00:07:37.709 00:07:37.709 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.709 http://cunit.sourceforge.net/ 00:07:37.709 00:07:37.709 00:07:37.709 Suite: app_suite 00:07:37.709 Test: notify ...passed 00:07:37.709 00:07:37.709 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.709 suites 1 1 n/a 0 0 00:07:37.709 tests 1 1 1 0 0 00:07:37.709 asserts 13 13 13 0 n/a 00:07:37.709 00:07:37.709 Elapsed time = 0.000 seconds 00:07:37.969 00:07:37.969 real 0m0.030s 00:07:37.969 user 0m0.015s 00:07:37.969 sys 0m0.015s 00:07:37.969 21:04:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.969 21:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:37.969 ************************************ 00:07:37.969 END TEST unittest_notify 00:07:37.969 ************************************ 00:07:37.969 21:04:00 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:07:37.969 21:04:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:37.969 21:04:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.969 21:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:37.969 ************************************ 00:07:37.969 START TEST unittest_nvme 00:07:37.969 ************************************ 00:07:37.969 21:04:00 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:07:37.969 21:04:00 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:07:37.969 00:07:37.969 00:07:37.969 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.969 http://cunit.sourceforge.net/ 00:07:37.969 00:07:37.969 00:07:37.969 Suite: nvme 00:07:37.969 Test: test_opc_data_transfer ...passed 00:07:37.969 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:07:37.969 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:07:37.969 Test: test_trid_parse_and_compare ...[2024-06-07 21:04:00.455830] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:07:37.969 [2024-06-07 21:04:00.456302] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:37.969 [2024-06-07 
21:04:00.456432] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:07:37.969 [2024-06-07 21:04:00.456487] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:37.969 [2024-06-07 21:04:00.456540] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:07:37.969 [2024-06-07 21:04:00.456677] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:37.969 passed 00:07:37.969 Test: test_trid_trtype_str ...passed 00:07:37.969 Test: test_trid_adrfam_str ...passed 00:07:37.969 Test: test_nvme_ctrlr_probe ...[2024-06-07 21:04:00.457094] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:37.969 passed 00:07:37.969 Test: test_spdk_nvme_probe ...[2024-06-07 21:04:00.457241] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:37.969 [2024-06-07 21:04:00.457267] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:37.969 [2024-06-07 21:04:00.457356] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:07:37.969 [2024-06-07 21:04:00.457389] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:37.969 passed 00:07:37.969 Test: test_spdk_nvme_connect ...[2024-06-07 21:04:00.457483] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:07:37.969 [2024-06-07 21:04:00.457847] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:37.969 [2024-06-07 21:04:00.457918] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:07:37.969 passed 00:07:37.969 Test: test_nvme_ctrlr_probe_internal ...[2024-06-07 21:04:00.458067] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:37.969 [2024-06-07 21:04:00.458114] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:07:37.969 passed 00:07:37.969 Test: test_nvme_init_controllers ...[2024-06-07 21:04:00.458199] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:07:37.969 passed 00:07:37.969 Test: test_nvme_driver_init ...[2024-06-07 21:04:00.458308] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:07:37.969 [2024-06-07 21:04:00.458339] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:37.969 [2024-06-07 21:04:00.572161] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:07:37.969 [2024-06-07 21:04:00.572272] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:07:37.969 passed 00:07:37.969 Test: test_spdk_nvme_detach ...passed 00:07:37.969 Test: test_nvme_completion_poll_cb ...passed 00:07:37.969 Test: test_nvme_user_copy_cmd_complete ...passed 00:07:37.969 Test: 
test_nvme_allocate_request_null ...passed 00:07:37.969 Test: test_nvme_allocate_request ...passed 00:07:37.969 Test: test_nvme_free_request ...passed 00:07:37.969 Test: test_nvme_allocate_request_user_copy ...passed 00:07:37.969 Test: test_nvme_robust_mutex_init_shared ...passed 00:07:37.969 Test: test_nvme_request_check_timeout ...passed 00:07:37.969 Test: test_nvme_wait_for_completion ...passed 00:07:37.970 Test: test_spdk_nvme_parse_func ...passed 00:07:37.970 Test: test_spdk_nvme_detach_async ...passed 00:07:37.970 Test: test_nvme_parse_addr ...[2024-06-07 21:04:00.573118] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:07:37.970 passed 00:07:37.970 00:07:37.970 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.970 suites 1 1 n/a 0 0 00:07:37.970 tests 25 25 25 0 0 00:07:37.970 asserts 326 326 326 0 n/a 00:07:37.970 00:07:37.970 Elapsed time = 0.006 seconds 00:07:37.970 21:04:00 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:07:37.970 00:07:37.970 00:07:37.970 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.970 http://cunit.sourceforge.net/ 00:07:37.970 00:07:37.970 00:07:37.970 Suite: nvme_ctrlr 00:07:37.970 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-06-07 21:04:00.607524] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:37.970 passed 00:07:37.970 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-06-07 21:04:00.609294] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:37.970 passed 00:07:37.970 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-06-07 21:04:00.610617] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:37.970 passed 00:07:37.970 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-06-07 21:04:00.611939] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:37.970 passed 00:07:37.970 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-06-07 21:04:00.613358] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:37.970 [2024-06-07 21:04:00.614566] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
[2024-06-07 21:04:00.615842] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
[2024-06-07 21:04:00.617026] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
passed 00:07:37.970 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-06-07 21:04:00.619572] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:37.970 [2024-06-07 21:04:00.622046] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
[2024-06-07 21:04:00.623306] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
passed 00:07:37.970 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-06-07 21:04:00.625804] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:37.970 [2024-06-07 21:04:00.627088] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
[2024-06-07 21:04:00.629606] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
passed 00:07:37.970 Test: test_nvme_ctrlr_init_delay ...[2024-06-07 21:04:00.632270] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:37.970 passed 00:07:37.970 Test: test_alloc_io_qpair_rr_1 ...[2024-06-07 21:04:00.633704] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:37.970 [2024-06-07 21:04:00.633846] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:37.970 [2024-06-07 21:04:00.634026] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:37.970 [2024-06-07 21:04:00.634081] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:37.970 [2024-06-07 21:04:00.634116] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:37.970 [2024-06-07 21:04:00.634253] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:37.970 passed 00:07:37.970 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:07:37.970 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:07:37.970 Test: test_alloc_io_qpair_wrr_1 ...passed 00:07:37.970 Test: test_alloc_io_qpair_wrr_2 ...[2024-06-07 21:04:00.634447] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:37.970 [2024-06-07 21:04:00.634576] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:37.970 passed 00:07:37.970 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-06-07 21:04:00.634857] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4832:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size!
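
The test_alloc_io_qpair_* cases above walk the rejection paths of I/O queue-pair allocation: exhausting queue IDs ("No free I/O queue IDs") and passing a queue priority that the default round-robin arbitration method refuses. For context, a minimal sketch of the corresponding happy path, assuming 'ctrlr' came from an earlier probe/connect (illustrative only, not part of this test run):

    #include "spdk/nvme.h"

    /* Sketch: allocate one I/O qpair with default options. 'ctrlr' is an
     * assumed input from a prior spdk_nvme_probe()/spdk_nvme_connect().
     * With the default round-robin arbitration method, opts.qprio must
     * stay 0; any other value hits the "invalid queue priority" rejection
     * logged above. */
    static struct spdk_nvme_qpair *
    alloc_io_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));

        /* Returns NULL when no queue ID is left ("No free I/O queue IDs"). */
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }
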
00:07:37.970 passed 00:07:37.970 Test: test_nvme_ctrlr_fail ...passed 00:07:37.970 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:07:37.970 Test: test_nvme_ctrlr_set_supported_features ...passed 00:07:37.970 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:07:37.970 Test: test_nvme_ctrlr_test_active_ns ...[2024-06-07 21:04:00.635015] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4869:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:37.970 [2024-06-07 21:04:00.635110] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4909:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:07:37.970 [2024-06-07 21:04:00.635164] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4869:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:37.970 [2024-06-07 21:04:00.635221] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:07:37.970 [2024-06-07 21:04:00.635517] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:38.538 passed 00:07:38.538 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:07:38.538 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:07:38.538 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:07:38.538 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-06-07 21:04:00.953707] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:38.538 passed 00:07:38.538 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-06-07 21:04:00.961019] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:38.538 passed 00:07:38.538 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-06-07 21:04:00.962313] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:38.538 [2024-06-07 21:04:00.962379] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2869:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:07:38.538 passed 00:07:38.538 Test: test_alloc_io_qpair_fail ...[2024-06-07 21:04:00.963560] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:38.538 passed 00:07:38.538 Test: test_nvme_ctrlr_add_remove_process ...passed 00:07:38.538 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:07:38.538 Test: test_nvme_ctrlr_set_state ...passed 00:07:38.538 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-06-07 21:04:00.963706] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:07:38.538 [2024-06-07 21:04:00.963881] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1464:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
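
The spdk_nvme_ctrlr_update_firmware failures logged above ("invalid size!", a failed image download, a failed commit) are driven through the public firmware-update entry point. A minimal sketch of that call, assuming 'ctrlr', 'image' and 'size' are supplied by the caller; the slot number and commit action are illustrative choices:

    #include <errno.h>
    #include "spdk/nvme.h"

    /* Sketch: drive a firmware update. The size check mirrors the
     * "invalid size!" rejection: the payload must be a non-zero multiple
     * of 4 bytes (dword granularity). */
    static int
    update_fw(struct spdk_nvme_ctrlr *ctrlr, void *image, uint32_t size)
    {
        struct spdk_nvme_status status;

        if (size == 0 || (size % 4) != 0) {
            return -EINVAL;
        }

        return spdk_nvme_ctrlr_update_firmware(ctrlr, image, size,
                                               1 /* slot */,
                                               SPDK_NVME_FW_COMMIT_REPLACE_AND_ENABLE_IMG,
                                               &status);
    }
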
00:07:38.538 [2024-06-07 21:04:00.963925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:38.538 passed 00:07:38.538 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-06-07 21:04:00.987226] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:38.538 passed 00:07:38.538 Test: test_nvme_ctrlr_ns_mgmt ...[2024-06-07 21:04:01.029409] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:38.538 passed 00:07:38.538 Test: test_nvme_ctrlr_reset ...[2024-06-07 21:04:01.031019] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:38.538 passed 00:07:38.538 Test: test_nvme_ctrlr_aer_callback ...[2024-06-07 21:04:01.031471] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:38.538 passed 00:07:38.538 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-06-07 21:04:01.032923] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:38.538 passed 00:07:38.538 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:07:38.538 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:07:38.538 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-06-07 21:04:01.034733] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:38.538 passed 00:07:38.538 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:07:38.538 Test: test_nvme_ctrlr_ana_resize ...[2024-06-07 21:04:01.036180] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:38.538 passed 00:07:38.538 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:07:38.538 Test: test_nvme_transport_ctrlr_ready ...passed 00:07:38.538 Test: test_nvme_ctrlr_disable ...[2024-06-07 21:04:01.037826] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4015:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:07:38.538 [2024-06-07 21:04:01.037876] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:07:38.538 [2024-06-07 21:04:01.037917] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:38.538 passed 00:07:38.538 00:07:38.538 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.538 suites 1 1 n/a 0 0 00:07:38.538 tests 43 43 43 0 0 00:07:38.538 asserts 10418 10418 10418 0 n/a 00:07:38.538 00:07:38.538 Elapsed time = 0.390 seconds 00:07:38.538 21:04:01 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:07:38.538 00:07:38.538 00:07:38.538 CUnit - A unit testing framework for C - Version 2.1-3 
00:07:38.538 http://cunit.sourceforge.net/ 00:07:38.538 00:07:38.538 00:07:38.538 Suite: nvme_ctrlr_cmd 00:07:38.538 Test: test_get_log_pages ...passed 00:07:38.538 Test: test_set_feature_cmd ...passed 00:07:38.538 Test: test_set_feature_ns_cmd ...passed 00:07:38.538 Test: test_get_feature_cmd ...passed 00:07:38.538 Test: test_get_feature_ns_cmd ...passed 00:07:38.538 Test: test_abort_cmd ...passed 00:07:38.538 Test: test_set_host_id_cmds ...[2024-06-07 21:04:01.086041] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 502:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:07:38.538 passed 00:07:38.538 Test: test_io_cmd_raw_no_payload_build ...passed 00:07:38.538 Test: test_io_raw_cmd ...passed 00:07:38.538 Test: test_io_raw_cmd_with_md ...passed 00:07:38.538 Test: test_namespace_attach ...passed 00:07:38.538 Test: test_namespace_detach ...passed 00:07:38.538 Test: test_namespace_create ...passed 00:07:38.538 Test: test_namespace_delete ...passed 00:07:38.538 Test: test_doorbell_buffer_config ...passed 00:07:38.538 Test: test_format_nvme ...passed 00:07:38.538 Test: test_fw_commit ...passed 00:07:38.538 Test: test_fw_image_download ...passed 00:07:38.538 Test: test_sanitize ...passed 00:07:38.538 Test: test_directive ...passed 00:07:38.538 Test: test_nvme_request_add_abort ...passed 00:07:38.538 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:07:38.538 Test: test_nvme_ctrlr_cmd_identify ...passed 00:07:38.538 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:07:38.538 00:07:38.538 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.538 suites 1 1 n/a 0 0 00:07:38.538 tests 24 24 24 0 0 00:07:38.538 asserts 198 198 198 0 n/a 00:07:38.538 00:07:38.538 Elapsed time = 0.001 seconds 00:07:38.538 21:04:01 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:07:38.538 00:07:38.538 00:07:38.538 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.538 http://cunit.sourceforge.net/ 00:07:38.538 00:07:38.538 00:07:38.538 Suite: nvme_ctrlr_cmd 00:07:38.538 Test: test_geometry_cmd ...passed 00:07:38.538 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:07:38.538 00:07:38.538 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.538 suites 1 1 n/a 0 0 00:07:38.538 tests 2 2 2 0 0 00:07:38.538 asserts 7 7 7 0 n/a 00:07:38.538 00:07:38.538 Elapsed time = 0.000 seconds 00:07:38.538 21:04:01 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:07:38.538 00:07:38.538 00:07:38.538 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.538 http://cunit.sourceforge.net/ 00:07:38.538 00:07:38.538 00:07:38.538 Suite: nvme 00:07:38.538 Test: test_nvme_ns_construct ...passed 00:07:38.538 Test: test_nvme_ns_uuid ...passed 00:07:38.538 Test: test_nvme_ns_csi ...passed 00:07:38.538 Test: test_nvme_ns_data ...passed 00:07:38.538 Test: test_nvme_ns_set_identify_data ...passed 00:07:38.538 Test: test_spdk_nvme_ns_get_values ...passed 00:07:38.538 Test: test_spdk_nvme_ns_is_active ...passed 00:07:38.538 Test: spdk_nvme_ns_supports ...passed 00:07:38.538 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:07:38.538 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:07:38.538 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:07:38.538 Test: test_nvme_ns_find_id_desc ...passed 00:07:38.538 00:07:38.538 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.538 suites 1 1 n/a 0 0 00:07:38.539 tests 
12 12 12 0 0 00:07:38.539 asserts 83 83 83 0 n/a 00:07:38.539 00:07:38.539 Elapsed time = 0.001 seconds 00:07:38.539 21:04:01 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:07:38.539 00:07:38.539 00:07:38.539 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.539 http://cunit.sourceforge.net/ 00:07:38.539 00:07:38.539 00:07:38.539 Suite: nvme_ns_cmd 00:07:38.539 Test: split_test ...passed 00:07:38.539 Test: split_test2 ...passed 00:07:38.539 Test: split_test3 ...passed 00:07:38.539 Test: split_test4 ...passed 00:07:38.539 Test: test_nvme_ns_cmd_flush ...passed 00:07:38.539 Test: test_nvme_ns_cmd_dataset_management ...passed 00:07:38.539 Test: test_nvme_ns_cmd_copy ...passed 00:07:38.539 Test: test_io_flags ...[2024-06-07 21:04:01.170480] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:07:38.539 passed 00:07:38.539 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:07:38.539 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:07:38.539 Test: test_nvme_ns_cmd_reservation_register ...passed 00:07:38.539 Test: test_nvme_ns_cmd_reservation_release ...passed 00:07:38.539 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:07:38.539 Test: test_nvme_ns_cmd_reservation_report ...passed 00:07:38.539 Test: test_cmd_child_request ...passed 00:07:38.539 Test: test_nvme_ns_cmd_readv ...passed 00:07:38.539 Test: test_nvme_ns_cmd_read_with_md ...passed 00:07:38.539 Test: test_nvme_ns_cmd_writev ...[2024-06-07 21:04:01.171555] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:07:38.539 passed 00:07:38.539 Test: test_nvme_ns_cmd_write_with_md ...passed 00:07:38.539 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:07:38.539 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:07:38.539 Test: test_nvme_ns_cmd_comparev ...passed 00:07:38.539 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:07:38.539 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:07:38.539 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:07:38.539 Test: test_nvme_ns_cmd_setup_request ...passed 00:07:38.539 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:07:38.539 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:07:38.539 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:07:38.539 Test: test_nvme_ns_cmd_verify ...passed 00:07:38.539 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:07:38.539 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:07:38.539 00:07:38.539 [2024-06-07 21:04:01.173238] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:38.539 [2024-06-07 21:04:01.173319] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:38.539 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.539 suites 1 1 n/a 0 0 00:07:38.539 tests 32 32 32 0 0 00:07:38.539 asserts 550 550 550 0 n/a 00:07:38.539 00:07:38.539 Elapsed time = 0.004 seconds 00:07:38.539 21:04:01 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:07:38.539 00:07:38.539 00:07:38.539 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.539 http://cunit.sourceforge.net/ 00:07:38.539 00:07:38.539 00:07:38.539 Suite: nvme_ns_cmd 00:07:38.539 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
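
The "Invalid io_flags" rejections in the nvme_ns_cmd suite above come from _is_io_flags_valid() in nvme_ns_cmd.c, which screens the flag word on every I/O submission; bit patterns such as 0xfffc fall outside the supported mask. A minimal sketch of a read that passes a supported flag, assuming 'ns', 'qpair', the buffer and the completion callback are set up elsewhere:

    #include "spdk/nvme.h"

    /* Sketch: submit a read with an explicit, valid io_flags value. */
    static int
    submit_read_fua(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                    void *buf, uint64_t lba, uint32_t lba_count,
                    spdk_nvme_cmd_cb cb_fn, void *cb_arg)
    {
        return spdk_nvme_ns_cmd_read(ns, qpair, buf, lba, lba_count,
                                     cb_fn, cb_arg,
                                     SPDK_NVME_IO_FLAGS_FORCE_UNIT_ACCESS);
    }
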
00:07:38.539 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:07:38.539 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:07:38.539 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:07:38.539 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:07:38.539 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:07:38.539 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:07:38.539 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:07:38.539 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:07:38.539 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:07:38.539 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:07:38.539 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:07:38.539 00:07:38.539 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.539 suites 1 1 n/a 0 0 00:07:38.539 tests 12 12 12 0 0 00:07:38.539 asserts 123 123 123 0 n/a 00:07:38.539 00:07:38.539 Elapsed time = 0.002 seconds 00:07:38.798 21:04:01 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:07:38.798 00:07:38.798 00:07:38.798 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.798 http://cunit.sourceforge.net/ 00:07:38.798 00:07:38.798 00:07:38.798 Suite: nvme_qpair 00:07:38.798 Test: test3 ...passed 00:07:38.798 Test: test_ctrlr_failed ...passed 00:07:38.798 Test: struct_packing ...passed 00:07:38.798 Test: test_nvme_qpair_process_completions ...[2024-06-07 21:04:01.241466] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:38.798 passed 00:07:38.798 Test: test_nvme_completion_is_retry ...passed 00:07:38.798 Test: test_get_status_string ...passed 00:07:38.798 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:07:38.798 Test: test_nvme_qpair_submit_request ...passed 00:07:38.798 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:07:38.798 Test: test_nvme_qpair_manual_complete_request ...passed 00:07:38.798 Test: test_nvme_qpair_init_deinit ...passed 00:07:38.798 Test: test_nvme_get_sgl_print_info ...passed 00:07:38.798 00:07:38.798 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.798 suites 1 1 n/a 0 0 00:07:38.798 tests 12 12 12 0 0 00:07:38.798 asserts 154 154 154 0 n/a 00:07:38.798 00:07:38.798 Elapsed time = 0.001 seconds 00:07:38.798 [2024-06-07 21:04:01.241728] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:38.798 [2024-06-07 21:04:01.241801] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:07:38.798 [2024-06-07 21:04:01.241879] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:07:38.798 [2024-06-07 21:04:01.242267] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:38.798 21:04:01 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:07:38.798 00:07:38.798 00:07:38.798 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.798 http://cunit.sourceforge.net/ 00:07:38.798 00:07:38.798 00:07:38.798 Suite: nvme_pcie 00:07:38.798 Test: test_prp_list_append 
...[2024-06-07 21:04:01.269418] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:38.798 passed 00:07:38.798 Test: test_nvme_pcie_hotplug_monitor ...passed 00:07:38.798 Test: test_shadow_doorbell_update ...passed 00:07:38.798 Test: test_build_contig_hw_sgl_request ...passed 00:07:38.798 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:07:38.798 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:07:38.798 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:07:38.798 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:07:38.798 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:07:38.798 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:07:38.798 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:07:38.798 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:07:38.798 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:07:38.798 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:07:38.798 00:07:38.798 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.798 suites 1 1 n/a 0 0 00:07:38.798 tests 14 14 14 0 0 00:07:38.798 asserts 235 235 235 0 n/a 00:07:38.798 00:07:38.798 Elapsed time = 0.001 seconds 00:07:38.798 [2024-06-07 21:04:01.270035] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:07:38.798 [2024-06-07 21:04:01.270089] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:07:38.798 [2024-06-07 21:04:01.270344] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:38.798 [2024-06-07 21:04:01.270427] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:38.798 [2024-06-07 21:04:01.270579] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:38.798 [2024-06-07 21:04:01.270647] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
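
The test_prp_list_append failures above encode NVMe's PRP rules: a transfer's virtual address must be dword aligned, every PRP entry after the first must start on a page boundary, and a request that needs more list slots than are available is rejected ("out of PRP entries"). An illustrative helper restating those checks (not an SPDK API, just the rules in code form):

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative only: the alignment checks nvme_pcie_prp_list_append
     * enforces. 'page_size' is assumed to be a power of two. */
    static bool
    prp_entry_ok(uint64_t virt_addr, bool first_entry, uint64_t page_size)
    {
        if ((virt_addr & 0x3) != 0) {
            return false;  /* "virt_addr ... not dword aligned" */
        }
        if (!first_entry && (virt_addr & (page_size - 1)) != 0) {
            return false;  /* "PRP 2 not page aligned" */
        }
        return true;
    }
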
00:07:38.798 [2024-06-07 21:04:01.270722] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:07:38.798 [2024-06-07 21:04:01.270761] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:07:38.798 [2024-06-07 21:04:01.270798] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:07:38.799 21:04:01 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:07:38.799 00:07:38.799 00:07:38.799 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.799 http://cunit.sourceforge.net/ 00:07:38.799 00:07:38.799 00:07:38.799 Suite: nvme_ns_cmd 00:07:38.799 Test: nvme_poll_group_create_test ...passed 00:07:38.799 Test: nvme_poll_group_add_remove_test ...passed 00:07:38.799 Test: nvme_poll_group_process_completions ...passed 00:07:38.799 Test: nvme_poll_group_destroy_test ...passed 00:07:38.799 Test: nvme_poll_group_get_free_stats ...passed 00:07:38.799 00:07:38.799 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.799 suites 1 1 n/a 0 0 00:07:38.799 tests 5 5 5 0 0 00:07:38.799 asserts 75 75 75 0 n/a 00:07:38.799 00:07:38.799 Elapsed time = 0.000 seconds 00:07:38.799 21:04:01 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:07:38.799 00:07:38.799 00:07:38.799 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.799 http://cunit.sourceforge.net/ 00:07:38.799 00:07:38.799 00:07:38.799 Suite: nvme_quirks 00:07:38.799 Test: test_nvme_quirks_striping ...passed 00:07:38.799 00:07:38.799 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.799 suites 1 1 n/a 0 0 00:07:38.799 tests 1 1 1 0 0 00:07:38.799 asserts 5 5 5 0 n/a 00:07:38.799 00:07:38.799 Elapsed time = 0.000 seconds 00:07:38.799 21:04:01 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:07:38.799 00:07:38.799 00:07:38.799 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.799 http://cunit.sourceforge.net/ 00:07:38.799 00:07:38.799 00:07:38.799 Suite: nvme_tcp 00:07:38.799 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:07:38.799 Test: test_nvme_tcp_build_iovs ...passed 00:07:38.799 Test: test_nvme_tcp_build_sgl_request ...[2024-06-07 21:04:01.359337] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffd17557240, and the iovcnt=16, remaining_size=28672 00:07:38.799 passed 00:07:38.799 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:07:38.799 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:07:38.799 Test: test_nvme_tcp_req_complete_safe ...passed 00:07:38.799 Test: test_nvme_tcp_req_get ...passed 00:07:38.799 Test: test_nvme_tcp_req_init ...passed 00:07:38.799 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:07:38.799 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:07:38.799 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:07:38.799 Test: test_nvme_tcp_alloc_reqs ...passed 00:07:38.799 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:07:38.799 Test: test_nvme_tcp_pdu_ch_handle ...passed 00:07:38.799 Test: test_nvme_tcp_qpair_connect_sock ...passed 00:07:38.799 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:07:38.799 Test: test_nvme_tcp_c2h_payload_handle ...passed 00:07:38.799 Test: 
test_nvme_tcp_icresp_handle ...passed 00:07:38.799 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:07:38.799 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:07:38.799 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:07:38.799 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...passed 00:07:38.799 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-06-07 21:04:01.360061] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd17558f60 is same with the state(6) to be set 00:07:38.799 [2024-06-07 21:04:01.360372] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd175580f0 is same with the state(5) to be set 00:07:38.799 [2024-06-07 21:04:01.360437] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffd17558c20 00:07:38.799 [2024-06-07 21:04:01.360480] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:07:38.799 [2024-06-07 21:04:01.360549] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd175585b0 is same with the state(5) to be set 00:07:38.799 [2024-06-07 21:04:01.360597] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:07:38.799 [2024-06-07 21:04:01.360698] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd175585b0 is same with the state(5) to be set 00:07:38.799 [2024-06-07 21:04:01.360749] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:07:38.799 [2024-06-07 21:04:01.360782] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd175585b0 is same with the state(5) to be set 00:07:38.799 [2024-06-07 21:04:01.360834] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd175585b0 is same with the state(5) to be set 00:07:38.799 [2024-06-07 21:04:01.360903] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd175585b0 is same with the state(5) to be set 00:07:38.799 [2024-06-07 21:04:01.360957] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd175585b0 is same with the state(5) to be set 00:07:38.799 [2024-06-07 21:04:01.360993] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd175585b0 is same with the state(5) to be set 00:07:38.799 [2024-06-07 21:04:01.361030] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd175585b0 is same with the state(5) to be set 00:07:38.799 [2024-06-07 21:04:01.361182] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:07:38.799 [2024-06-07 21:04:01.361237] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:07:38.799 [2024-06-07 21:04:01.361452] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:07:38.799 [2024-06-07 21:04:01.361557] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffd17558760): PDU Sequence Error 00:07:38.799 [2024-06-07 21:04:01.361665] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:07:38.799 [2024-06-07 21:04:01.361703] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:07:38.799 [2024-06-07 21:04:01.361733] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd17558100 is same with the state(5) to be set 00:07:38.799 [2024-06-07 21:04:01.361764] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:07:38.799 [2024-06-07 21:04:01.361795] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd17558100 is same with the state(5) to be set 00:07:38.799 [2024-06-07 21:04:01.361836] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd17558100 is same with the state(0) to be set 00:07:38.799 [2024-06-07 21:04:01.361885] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffd17558c20): PDU Sequence Error 00:07:38.799 [2024-06-07 21:04:01.361964] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffd175573e0 00:07:38.799 [2024-06-07 21:04:01.362161] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffd17556a60, errno=0, rc=0 00:07:38.799 [2024-06-07 21:04:01.362212] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd17556a60 is same with the state(5) to be set 00:07:38.799 [2024-06-07 21:04:01.362274] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd17556a60 is same with the state(5) to be set 00:07:38.799 [2024-06-07 21:04:01.362317] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffd17556a60 (0): Success 00:07:38.799 [2024-06-07 21:04:01.362363] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffd17556a60 (0): Success 00:07:39.058 passed 00:07:39.058 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:07:39.058 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:07:39.058 Test: test_nvme_tcp_ctrlr_construct ...passed 00:07:39.058 Test: test_nvme_tcp_qpair_submit_request ...passed 00:07:39.058 00:07:39.058 Run Summary: Type Total Ran Passed Failed Inactive 00:07:39.058 suites 1 1 n/a 0 0 00:07:39.058 tests 27 27 27 0 0 00:07:39.058 asserts 624 624 624 0 n/a 00:07:39.058 00:07:39.058 Elapsed time = 0.116 seconds 00:07:39.058 [2024-06-07 21:04:01.474470] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
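
The connect_sock and parse errors interleaved above ("Unhandled ADRFAM", "dst_addr nvme_parse_addr() failed") sit on the user-facing fabrics connect path. A minimal sketch of that path, with placeholder address, service and subsystem NQN values; nvme_parse_addr (tested earlier in this run) requires both traddr and trsvcid to be present:

    #include <stddef.h>
    #include "spdk/nvme.h"

    /* Sketch: parse a TCP transport ID and connect with default options. */
    static struct spdk_nvme_ctrlr *
    connect_tcp(void)
    {
        struct spdk_nvme_transport_id trid = {};

        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:TCP adrfam:IPv4 traddr:192.168.1.78 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return NULL;
        }

        /* NULL opts selects the controller defaults. */
        return spdk_nvme_connect(&trid, NULL, 0);
    }
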
00:07:39.058 [2024-06-07 21:04:01.474584] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:39.058 [2024-06-07 21:04:01.474798] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:39.058 [2024-06-07 21:04:01.474835] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:39.058 [2024-06-07 21:04:01.475047] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:39.058 [2024-06-07 21:04:01.475084] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:39.058 [2024-06-07 21:04:01.475176] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:07:39.058 [2024-06-07 21:04:01.475227] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:39.058 [2024-06-07 21:04:01.475334] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:07:39.058 [2024-06-07 21:04:01.475394] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:39.058 [2024-06-07 21:04:01.475523] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:07:39.058 [2024-06-07 21:04:01.475564] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:07:39.058 21:04:01 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:07:39.058 00:07:39.058 00:07:39.058 CUnit - A unit testing framework for C - Version 2.1-3 00:07:39.058 http://cunit.sourceforge.net/ 00:07:39.058 00:07:39.058 00:07:39.058 Suite: nvme_transport 00:07:39.058 Test: test_nvme_get_transport ...passed 00:07:39.058 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:07:39.058 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:07:39.058 Test: test_nvme_transport_poll_group_add_remove ...passed 00:07:39.058 Test: test_ctrlr_get_memory_domains ...passed 00:07:39.058 00:07:39.058 Run Summary: Type Total Ran Passed Failed Inactive 00:07:39.058 suites 1 1 n/a 0 0 00:07:39.058 tests 5 5 5 0 0 00:07:39.058 asserts 28 28 28 0 n/a 00:07:39.058 00:07:39.058 Elapsed time = 0.000 seconds 00:07:39.058 21:04:01 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:07:39.058 00:07:39.058 00:07:39.059 CUnit - A unit testing framework for C - Version 2.1-3 00:07:39.059 http://cunit.sourceforge.net/ 00:07:39.059 00:07:39.059 00:07:39.059 Suite: nvme_io_msg 00:07:39.059 Test: test_nvme_io_msg_send ...passed 00:07:39.059 Test: test_nvme_io_msg_process ...passed 00:07:39.059 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:07:39.059 00:07:39.059 Run Summary: Type Total Ran Passed Failed Inactive 00:07:39.059 suites 1 1 n/a 0 0 00:07:39.059 tests 3 3 3 0 0 00:07:39.059 asserts 56 56 56 0 n/a 00:07:39.059 00:07:39.059 Elapsed 
time = 0.000 seconds 00:07:39.059 21:04:01 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:07:39.059 00:07:39.059 00:07:39.059 CUnit - A unit testing framework for C - Version 2.1-3 00:07:39.059 http://cunit.sourceforge.net/ 00:07:39.059 00:07:39.059 00:07:39.059 Suite: nvme_pcie_common 00:07:39.059 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-06-07 21:04:01.580098] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:07:39.059 passed 00:07:39.059 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:07:39.059 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:07:39.059 Test: test_nvme_pcie_ctrlr_connect_qpair ...passed 00:07:39.059 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:07:39.059 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:07:39.059 00:07:39.059 Run Summary: Type Total Ran Passed Failed Inactive 00:07:39.059 suites 1 1 n/a 0 0 00:07:39.059 tests 6 6 6 0 0 00:07:39.059 asserts 148 148 148 0 n/a 00:07:39.059 00:07:39.059 Elapsed time = 0.002 seconds 00:07:39.059 [2024-06-07 21:04:01.581174] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:07:39.059 [2024-06-07 21:04:01.581305] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:07:39.059 [2024-06-07 21:04:01.581337] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:07:39.059 [2024-06-07 21:04:01.581687] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:39.059 [2024-06-07 21:04:01.581727] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:39.059 21:04:01 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:07:39.059 00:07:39.059 00:07:39.059 CUnit - A unit testing framework for C - Version 2.1-3 00:07:39.059 http://cunit.sourceforge.net/ 00:07:39.059 00:07:39.059 00:07:39.059 Suite: nvme_fabric 00:07:39.059 Test: test_nvme_fabric_prop_set_cmd ...passed 00:07:39.059 Test: test_nvme_fabric_prop_get_cmd ...passed 00:07:39.059 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:07:39.059 Test: test_nvme_fabric_discover_probe ...passed 00:07:39.059 Test: test_nvme_fabric_qpair_connect ...[2024-06-07 21:04:01.611319] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:07:39.059 passed 00:07:39.059 00:07:39.059 Run Summary: Type Total Ran Passed Failed Inactive 00:07:39.059 suites 1 1 n/a 0 0 00:07:39.059 tests 5 5 5 0 0 00:07:39.059 asserts 60 60 60 0 n/a 00:07:39.059 00:07:39.059 Elapsed time = 0.001 seconds 00:07:39.059 21:04:01 -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:07:39.059 00:07:39.059 00:07:39.059 CUnit - A unit testing framework for C - Version 2.1-3 00:07:39.059 http://cunit.sourceforge.net/ 00:07:39.059 00:07:39.059 00:07:39.059 Suite: nvme_opal 00:07:39.059 Test: 
test_opal_nvme_security_recv_send_done ...passed 00:07:39.059 Test: test_opal_add_short_atom_header ...[2024-06-07 21:04:01.641300] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:07:39.059 passed 00:07:39.059 00:07:39.059 Run Summary: Type Total Ran Passed Failed Inactive 00:07:39.059 suites 1 1 n/a 0 0 00:07:39.059 tests 2 2 2 0 0 00:07:39.059 asserts 22 22 22 0 n/a 00:07:39.059 00:07:39.059 Elapsed time = 0.001 seconds 00:07:39.059 00:07:39.059 real 0m1.216s 00:07:39.059 user 0m0.684s 00:07:39.059 sys 0m0.386s 00:07:39.059 21:04:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.059 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:07:39.059 ************************************ 00:07:39.059 END TEST unittest_nvme 00:07:39.059 ************************************ 00:07:39.059 21:04:01 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:39.059 21:04:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:39.059 21:04:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:39.059 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:07:39.059 ************************************ 00:07:39.059 START TEST unittest_log 00:07:39.059 ************************************ 00:07:39.059 21:04:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:39.059 00:07:39.059 00:07:39.059 CUnit - A unit testing framework for C - Version 2.1-3 00:07:39.059 http://cunit.sourceforge.net/ 00:07:39.059 00:07:39.059 00:07:39.059 Suite: log 00:07:39.059 Test: log_test ...[2024-06-07 21:04:01.723117] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:07:39.059 passed 00:07:39.059 Test: deprecation ...[2024-06-07 21:04:01.723374] log_ut.c: 55:log_test: *DEBUG*: log test 00:07:39.059 log dump test: 00:07:39.059 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:07:39.059 spdk dump test: 00:07:39.059 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:07:39.059 spdk dump test: 00:07:39.059 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:07:39.059 00000010 65 20 63 68 61 72 73 e chars 00:07:40.436 passed 00:07:40.436 00:07:40.436 Run Summary: Type Total Ran Passed Failed Inactive 00:07:40.436 suites 1 1 n/a 0 0 00:07:40.436 tests 2 2 2 0 0 00:07:40.436 asserts 73 73 73 0 n/a 00:07:40.436 00:07:40.436 Elapsed time = 0.001 seconds 00:07:40.436 00:07:40.436 real 0m1.031s 00:07:40.436 user 0m0.011s 00:07:40.436 sys 0m0.020s 00:07:40.436 21:04:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.436 ************************************ 00:07:40.436 END TEST unittest_log 00:07:40.436 ************************************ 00:07:40.436 21:04:02 -- common/autotest_common.sh@10 -- # set +x 00:07:40.436 21:04:02 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:40.436 21:04:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:40.436 21:04:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.436 21:04:02 -- common/autotest_common.sh@10 -- # set +x 00:07:40.436 ************************************ 00:07:40.436 START TEST unittest_lvol 00:07:40.436 ************************************ 00:07:40.436 21:04:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:40.436 00:07:40.436 00:07:40.436 CUnit - A unit 
testing framework for C - Version 2.1-3 00:07:40.436 http://cunit.sourceforge.net/ 00:07:40.436 00:07:40.436 00:07:40.436 Suite: lvol 00:07:40.437 Test: lvs_init_unload_success ...[2024-06-07 21:04:02.812295] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:07:40.437 passed 00:07:40.437 Test: lvs_init_destroy_success ...[2024-06-07 21:04:02.813862] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:07:40.437 passed 00:07:40.437 Test: lvs_init_opts_success ...passed 00:07:40.437 Test: lvs_unload_lvs_is_null_fail ...[2024-06-07 21:04:02.814766] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:07:40.437 passed 00:07:40.437 Test: lvs_names ...[2024-06-07 21:04:02.815266] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:07:40.437 [2024-06-07 21:04:02.815556] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:07:40.437 [2024-06-07 21:04:02.816001] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:07:40.437 passed 00:07:40.437 Test: lvol_create_destroy_success ...passed 00:07:40.437 Test: lvol_create_fail ...[2024-06-07 21:04:02.817319] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:07:40.437 [2024-06-07 21:04:02.817685] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:07:40.437 passed 00:07:40.437 Test: lvol_destroy_fail ...[2024-06-07 21:04:02.818499] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:07:40.437 passed 00:07:40.437 Test: lvol_close ...[2024-06-07 21:04:02.819166] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:07:40.437 [2024-06-07 21:04:02.819460] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:07:40.437 passed 00:07:40.437 Test: lvol_resize ...passed 00:07:40.437 Test: lvol_set_read_only ...passed 00:07:40.437 Test: test_lvs_load ...[2024-06-07 21:04:02.821259] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:07:40.437 [2024-06-07 21:04:02.821540] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:07:40.437 passed 00:07:40.437 Test: lvols_load ...[2024-06-07 21:04:02.822251] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:40.437 [2024-06-07 21:04:02.822616] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:40.437 passed 00:07:40.437 Test: lvol_open ...passed 00:07:40.437 Test: lvol_snapshot ...passed 00:07:40.437 Test: lvol_snapshot_fail ...[2024-06-07 21:04:02.824354] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:07:40.437 passed 00:07:40.437 Test: lvol_clone ...passed 00:07:40.437 Test: lvol_clone_fail ...[2024-06-07 21:04:02.825738] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:07:40.437 passed 00:07:40.437 Test: lvol_iter_clones ...passed 00:07:40.437 Test: lvol_refcnt 
...[2024-06-07 21:04:02.826992] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 0d133144-f1ae-44fc-a0d4-57d8ff8c3692 because it is still open 00:07:40.437 passed 00:07:40.437 Test: lvol_names ...[2024-06-07 21:04:02.827691] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:07:40.437 [2024-06-07 21:04:02.828046] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:40.437 [2024-06-07 21:04:02.828581] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:07:40.437 passed 00:07:40.437 Test: lvol_create_thin_provisioned ...passed 00:07:40.437 Test: lvol_rename ...[2024-06-07 21:04:02.829809] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:40.437 [2024-06-07 21:04:02.830139] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:07:40.437 passed 00:07:40.437 Test: lvs_rename ...[2024-06-07 21:04:02.830846] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:07:40.437 passed 00:07:40.437 Test: lvol_inflate ...[2024-06-07 21:04:02.831550] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:40.437 passed 00:07:40.437 Test: lvol_decouple_parent ...[2024-06-07 21:04:02.832279] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:40.437 passed 00:07:40.437 Test: lvol_get_xattr ...passed 00:07:40.437 Test: lvol_esnap_reload ...passed 00:07:40.437 Test: lvol_esnap_create_bad_args ...[2024-06-07 21:04:02.833666] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:07:40.437 [2024-06-07 21:04:02.833930] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:07:40.437 [2024-06-07 21:04:02.834213] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:07:40.437 [2024-06-07 21:04:02.834606] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:40.437 [2024-06-07 21:04:02.835004] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:07:40.437 passed 00:07:40.437 Test: lvol_esnap_create_delete ...passed 00:07:40.437 Test: lvol_esnap_load_esnaps ...[2024-06-07 21:04:02.836030] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:07:40.437 passed 00:07:40.437 Test: lvol_esnap_missing ...[2024-06-07 21:04:02.836615] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:40.437 [2024-06-07 21:04:02.837018] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:40.437 passed 00:07:40.437 Test: lvol_esnap_hotplug ... 
00:07:40.437 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:07:40.437 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:07:40.437 [2024-06-07 21:04:02.838528] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol a822d30a-5d3d-4965-a04f-32b74f3792f5: failed to create esnap bs_dev: error -12 00:07:40.437 lvol_esnap_hotplug scenario 2: PASS - one missing, cb returns -ENOMEM 00:07:40.437 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:07:40.437 [2024-06-07 21:04:02.839274] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 83d46233-9565-4e00-a845-ad616cb58efd: failed to create esnap bs_dev: error -12 00:07:40.437 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:07:40.437 [2024-06-07 21:04:02.839793] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol da37e6fd-2b83-4ec9-a29f-d528bb49b306: failed to create esnap bs_dev: error -12 00:07:40.437 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:07:40.437 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:07:40.437 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:07:40.437 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:07:40.437 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:07:40.437 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:07:40.437 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:07:40.437 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:07:40.437 passed 00:07:40.437 Test: lvol_get_by ...passed 00:07:40.437 00:07:40.437 Run Summary: Type Total Ran Passed Failed Inactive 00:07:40.437 suites 1 1 n/a 0 0 00:07:40.437 tests 34 34 34 0 0 00:07:40.437 asserts 1439 1439 1439 0 n/a 00:07:40.437 00:07:40.437 Elapsed time = 0.015 seconds 00:07:40.437 00:07:40.437 real 0m0.068s 00:07:40.437 user 0m0.020s 00:07:40.437 sys 0m0.033s 00:07:40.437 21:04:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.437 21:04:02 -- common/autotest_common.sh@10 -- # set +x 00:07:40.437 ************************************ 00:07:40.437 END TEST unittest_lvol 00:07:40.437 ************************************ 00:07:40.437 21:04:02 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:40.437 21:04:02 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:40.437 21:04:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:40.437 21:04:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.437 21:04:02 -- common/autotest_common.sh@10 -- # set +x 00:07:40.437 ************************************ 00:07:40.437 START TEST unittest_nvme_rdma 00:07:40.437 ************************************ 00:07:40.437 21:04:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:40.437 00:07:40.437 00:07:40.437 CUnit - A unit testing framework for C - Version 2.1-3 00:07:40.437 http://cunit.sourceforge.net/ 00:07:40.437 00:07:40.437 00:07:40.437 Suite: nvme_rdma 00:07:40.437 Test: 
test_nvme_rdma_build_sgl_request ...[2024-06-07 21:04:02.934937] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:07:40.437 passed 00:07:40.437 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:07:40.437 Test: test_nvme_rdma_build_contig_request ...passed 00:07:40.437 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:07:40.437 Test: test_nvme_rdma_create_reqs ...passed 00:07:40.437 Test: test_nvme_rdma_create_rsps ...passed 00:07:40.437 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-06-07 21:04:02.935463] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:40.437 [2024-06-07 21:04:02.935614] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:07:40.437 [2024-06-07 21:04:02.935739] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:40.437 [2024-06-07 21:04:02.935901] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:07:40.437 [2024-06-07 21:04:02.936367] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:07:40.437 passed 00:07:40.437 Test: test_nvme_rdma_poller_create ...passed 00:07:40.438 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-06-07 21:04:02.936671] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:07:40.438 [2024-06-07 21:04:02.936764] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:07:40.438 [2024-06-07 21:04:02.937010] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:07:40.438 passed 00:07:40.438 Test: test_nvme_rdma_ctrlr_construct ...passed 00:07:40.438 Test: test_nvme_rdma_req_put_and_get ...passed 00:07:40.438 Test: test_nvme_rdma_req_init ...passed 00:07:40.438 Test: test_nvme_rdma_validate_cm_event ...passed 00:07:40.438 Test: test_nvme_rdma_qpair_init ...passed 00:07:40.438 Test: test_nvme_rdma_qpair_submit_request ...passed 00:07:40.438 Test: test_nvme_rdma_memory_domain ...passed 00:07:40.438 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:07:40.438 Test: test_rdma_get_memory_translation ...[2024-06-07 21:04:02.937458] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:07:40.438 [2024-06-07 21:04:02.937526] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:07:40.438 [2024-06-07 21:04:02.937804] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:07:40.438 [2024-06-07 21:04:02.937958] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:07:40.438 passed 00:07:40.438 Test: test_get_rdma_qpair_from_wc ...passed 00:07:40.438 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:07:40.438 Test: test_nvme_rdma_poll_group_get_stats ...passed 00:07:40.438 Test: test_nvme_rdma_qpair_set_poller ...passed 00:07:40.438 00:07:40.438 Run Summary: Type Total Ran Passed Failed Inactive 00:07:40.438 suites 1 1 n/a 0 0 00:07:40.438 tests 22 22 22 0 0 00:07:40.438 asserts 412 412 412 0 n/a 00:07:40.438 00:07:40.438 Elapsed time = 0.004 seconds 00:07:40.438 [2024-06-07 21:04:02.938058] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:07:40.438 [2024-06-07 21:04:02.938196] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:40.438 [2024-06-07 21:04:02.938265] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:40.438 [2024-06-07 21:04:02.938451] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:07:40.438 [2024-06-07 21:04:02.938519] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:07:40.438 [2024-06-07 21:04:02.938566] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffd9f267460 on poll group 0x60b0000001a0 00:07:40.438 [2024-06-07 21:04:02.938654] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
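
Each *_ut binary in this run is a small CUnit program; the "Suite:", "Test: ...passed" and "Run Summary" lines throughout this log are CUnit's verbose output. A skeleton of how such a binary is assembled, with placeholder suite and test names:

    #include <CUnit/Basic.h>

    /* Placeholder test body; the SPDK suites register their real test
     * functions the same way. */
    static void
    test_example(void)
    {
        CU_ASSERT(1 + 1 == 2);
    }

    int
    main(void)
    {
        CU_pSuite suite;
        unsigned int num_failures;

        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }

        suite = CU_add_suite("example", NULL, NULL);
        if (suite == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_add_test(suite, "test_example", test_example);

        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();  /* emits the per-test and Run Summary lines */
        num_failures = CU_get_number_of_failures();
        CU_cleanup_registry();

        return num_failures;
    }
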
00:07:40.438 [2024-06-07 21:04:02.938718] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:07:40.438 [2024-06-07 21:04:02.938758] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffd9f267460 on poll group 0x60b0000001a0 00:07:40.438 [2024-06-07 21:04:02.938851] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:40.438 ************************************ 00:07:40.438 END TEST unittest_nvme_rdma 00:07:40.438 ************************************ 00:07:40.438 00:07:40.438 real 0m0.034s 00:07:40.438 user 0m0.015s 00:07:40.438 sys 0m0.018s 00:07:40.438 21:04:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.438 21:04:02 -- common/autotest_common.sh@10 -- # set +x 00:07:40.438 21:04:02 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:40.438 21:04:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:40.438 21:04:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.438 21:04:02 -- common/autotest_common.sh@10 -- # set +x 00:07:40.438 ************************************ 00:07:40.438 START TEST unittest_nvmf_transport 00:07:40.438 ************************************ 00:07:40.438 21:04:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:40.438 00:07:40.438 00:07:40.438 CUnit - A unit testing framework for C - Version 2.1-3 00:07:40.438 http://cunit.sourceforge.net/ 00:07:40.438 00:07:40.438 00:07:40.438 Suite: nvmf 00:07:40.438 Test: test_spdk_nvmf_transport_create ...[2024-06-07 21:04:03.017073] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:07:40.438 [2024-06-07 21:04:03.017588] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:07:40.438 [2024-06-07 21:04:03.017762] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:07:40.438 [2024-06-07 21:04:03.018011] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:07:40.438 passed 00:07:40.438 Test: test_nvmf_transport_poll_group_create ...passed 00:07:40.438 Test: test_spdk_nvmf_transport_opts_init ...[2024-06-07 21:04:03.018755] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
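
The transport_ut rejections printed in this stretch are plain option validation: io_unit_size must be non-zero and no larger than the iobuf large-buffer size, and max_io_size must be a power of two of at least 8 KiB. A hedged sketch of those three checks (constants and function names are ours, not the library's):

    #include <stdbool.h>
    #include <stdint.h>

    static bool
    is_power_of_two(uint32_t v)
    {
        return v != 0 && (v & (v - 1)) == 0;
    }

    /* Mirrors the three rejections in the log: io_unit_size == 0,
     * io_unit_size 131072 > pool buffer 65536, and max_io_size 4096,
     * which is a power of two but below the 8 KiB floor. */
    static bool
    transport_opts_ok(uint32_t max_io_size, uint32_t io_unit_size,
                      uint32_t large_bufsize)
    {
        if (io_unit_size == 0 || io_unit_size > large_bufsize) {
            return false;
        }
        return is_power_of_two(max_io_size) && max_io_size >= 8192;
    }
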
00:07:40.438 [2024-06-07 21:04:03.018970] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:07:40.438 [2024-06-07 21:04:03.019096] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:07:40.438 passed 00:07:40.438 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:07:40.438 00:07:40.438 Run Summary: Type Total Ran Passed Failed Inactive 00:07:40.438 suites 1 1 n/a 0 0 00:07:40.438 tests 4 4 4 0 0 00:07:40.438 asserts 49 49 49 0 n/a 00:07:40.438 00:07:40.438 Elapsed time = 0.002 seconds 00:07:40.438 00:07:40.438 real 0m0.036s 00:07:40.438 user 0m0.014s 00:07:40.438 sys 0m0.021s 00:07:40.438 21:04:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.438 ************************************ 00:07:40.438 21:04:03 -- common/autotest_common.sh@10 -- # set +x 00:07:40.438 END TEST unittest_nvmf_transport 00:07:40.438 ************************************ 00:07:40.438 21:04:03 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:40.438 21:04:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:40.438 21:04:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.438 21:04:03 -- common/autotest_common.sh@10 -- # set +x 00:07:40.438 ************************************ 00:07:40.438 START TEST unittest_rdma 00:07:40.438 ************************************ 00:07:40.438 21:04:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:40.438 00:07:40.438 00:07:40.438 CUnit - A unit testing framework for C - Version 2.1-3 00:07:40.438 http://cunit.sourceforge.net/ 00:07:40.438 00:07:40.438 00:07:40.438 Suite: rdma_common 00:07:40.438 Test: test_spdk_rdma_pd ...[2024-06-07 21:04:03.104776] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:07:40.438 [2024-06-07 21:04:03.105305] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:07:40.438 passed 00:07:40.438 00:07:40.438 Run Summary: Type Total Ran Passed Failed Inactive 00:07:40.438 suites 1 1 n/a 0 0 00:07:40.438 tests 1 1 1 0 0 00:07:40.438 asserts 31 31 31 0 n/a 00:07:40.438 00:07:40.438 Elapsed time = 0.001 seconds 00:07:40.697 00:07:40.697 real 0m0.030s 00:07:40.697 user 0m0.015s 00:07:40.697 sys 0m0.014s 00:07:40.697 21:04:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.697 ************************************ 00:07:40.697 END TEST unittest_rdma 00:07:40.697 ************************************ 00:07:40.697 21:04:03 -- common/autotest_common.sh@10 -- # set +x 00:07:40.697 21:04:03 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:40.697 21:04:03 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:40.697 21:04:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:40.697 21:04:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.697 21:04:03 -- common/autotest_common.sh@10 -- # set +x 00:07:40.697 ************************************ 00:07:40.697 START TEST unittest_nvme_cuse 00:07:40.697 ************************************ 00:07:40.697 21:04:03 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:40.697 00:07:40.697 00:07:40.697 CUnit - A unit testing framework for C - Version 2.1-3 00:07:40.697 http://cunit.sourceforge.net/ 00:07:40.697 00:07:40.697 00:07:40.697 Suite: nvme_cuse 00:07:40.697 Test: test_cuse_nvme_submit_io_read_write ...passed 00:07:40.697 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:07:40.697 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:07:40.697 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:07:40.697 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:07:40.697 Test: test_cuse_nvme_submit_io ...[2024-06-07 21:04:03.193939] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:07:40.697 passed 00:07:40.697 Test: test_cuse_nvme_reset ...[2024-06-07 21:04:03.194298] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:07:40.697 passed 00:07:40.697 Test: test_nvme_cuse_stop ...passed 00:07:40.697 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:07:40.697 00:07:40.697 Run Summary: Type Total Ran Passed Failed Inactive 00:07:40.697 suites 1 1 n/a 0 0 00:07:40.697 tests 9 9 9 0 0 00:07:40.697 asserts 121 121 121 0 n/a 00:07:40.697 00:07:40.697 Elapsed time = 0.002 seconds 00:07:40.697 00:07:40.697 real 0m0.035s 00:07:40.697 user 0m0.023s 00:07:40.697 sys 0m0.012s 00:07:40.697 21:04:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.697 21:04:03 -- common/autotest_common.sh@10 -- # set +x 00:07:40.697 ************************************ 00:07:40.697 END TEST unittest_nvme_cuse 00:07:40.697 ************************************ 00:07:40.697 21:04:03 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:07:40.697 21:04:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:40.698 21:04:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.698 21:04:03 -- common/autotest_common.sh@10 -- # set +x 00:07:40.698 ************************************ 00:07:40.698 START TEST unittest_nvmf 00:07:40.698 ************************************ 00:07:40.698 21:04:03 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:07:40.698 21:04:03 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:07:40.698 00:07:40.698 00:07:40.698 CUnit - A unit testing framework for C - Version 2.1-3 00:07:40.698 http://cunit.sourceforge.net/ 00:07:40.698 00:07:40.698 00:07:40.698 Suite: nvmf 00:07:40.698 Test: test_get_log_page ...[2024-06-07 21:04:03.284034] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:07:40.698 passed 00:07:40.698 Test: test_process_fabrics_cmd ...passed 00:07:40.698 Test: test_connect ...[2024-06-07 21:04:03.285021] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:07:40.698 [2024-06-07 21:04:03.285142] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:07:40.698 [2024-06-07 21:04:03.285182] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:07:40.698 [2024-06-07 21:04:03.285226] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
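
test_connect, whose error paths continue just below, walks the fabrics CONNECT validation one field at a time: RECFMT must be 0, HOSTNQN must be null-terminated, and SQSIZE is zero-based, so it must be at least 1 and strictly below the queue depth, hence the "min 1, max 31" message for a 32-entry admin queue in the lines that follow. A standalone restatement of the SQSIZE rule:

    #include <stdbool.h>
    #include <stdint.h>

    /* SQSIZE in a fabrics CONNECT command is a zero-based queue size:
     * 0 is reserved, and for a queue of depth N the valid range is
     * 1..N-1 (the queue can never be completely full). */
    static bool
    connect_sqsize_ok(uint16_t sqsize, uint16_t queue_depth)
    {
        /* connect_sqsize_ok(31, 32) -> true,
         * connect_sqsize_ok(32, 32) and (0, 32) -> false */
        return queue_depth >= 2 && sqsize >= 1 && sqsize < queue_depth;
    }
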
00:07:40.698 [2024-06-07 21:04:03.285322] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:07:40.698 [2024-06-07 21:04:03.285351] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:07:40.698 [2024-06-07 21:04:03.285446] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:07:40.698 [2024-06-07 21:04:03.285483] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:07:40.698 [2024-06-07 21:04:03.285596] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:07:40.698 [2024-06-07 21:04:03.285673] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:07:40.698 [2024-06-07 21:04:03.285971] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:07:40.698 [2024-06-07 21:04:03.286046] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:07:40.698 [2024-06-07 21:04:03.286138] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:07:40.698 [2024-06-07 21:04:03.286208] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:07:40.698 [2024-06-07 21:04:03.286324] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:07:40.698 [2024-06-07 21:04:03.286492] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:07:40.698 passed 00:07:40.698 Test: test_get_ns_id_desc_list ...passed 00:07:40.698 Test: test_identify_ns ...[2024-06-07 21:04:03.286732] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:40.698 [2024-06-07 21:04:03.286936] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:07:40.698 [2024-06-07 21:04:03.287075] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:07:40.698 passed 00:07:40.698 Test: test_identify_ns_iocs_specific ...[2024-06-07 21:04:03.287217] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:40.698 [2024-06-07 21:04:03.287505] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:40.698 passed 00:07:40.698 Test: test_reservation_write_exclusive ...passed 00:07:40.698 Test: test_reservation_exclusive_access ...passed 00:07:40.698 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:07:40.698 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:07:40.698 Test: test_reservation_notification_log_page ...passed 00:07:40.698 Test: test_get_dif_ctx ...passed 00:07:40.698 Test: test_set_get_features ...[2024-06-07 21:04:03.288053] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:40.698 [2024-06-07 21:04:03.288099] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:40.698 [2024-06-07 21:04:03.288141] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:07:40.698 [2024-06-07 21:04:03.288192] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:07:40.698 passed 00:07:40.698 Test: test_identify_ctrlr ...passed 00:07:40.698 Test: test_identify_ctrlr_iocs_specific ...passed 00:07:40.698 Test: test_custom_admin_cmd ...passed 00:07:40.698 Test: test_fused_compare_and_write ...[2024-06-07 21:04:03.288650] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:07:40.698 [2024-06-07 21:04:03.288695] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:40.698 [2024-06-07 21:04:03.288734] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:40.698 passed 00:07:40.698 Test: test_multi_async_event_reqs ...passed 00:07:40.698 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:07:40.698 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:07:40.698 Test: test_multi_async_events ...passed 00:07:40.698 Test: test_rae ...passed 00:07:40.698 Test: test_nvmf_ctrlr_create_destruct ...passed 00:07:40.698 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:07:40.698 Test: test_spdk_nvmf_request_zcopy_start ...[2024-06-07 21:04:03.289293] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:07:40.698 passed 00:07:40.698 Test: test_zcopy_read ...passed 00:07:40.698 Test: test_zcopy_write ...passed 00:07:40.698 Test: test_nvmf_property_set ...passed 00:07:40.698 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:07:40.698 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-06-07 21:04:03.289458] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:40.698 [2024-06-07 21:04:03.289549] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:40.698 [2024-06-07 21:04:03.289589] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:07:40.698 [2024-06-07 21:04:03.289631] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:07:40.698 [2024-06-07 21:04:03.289658] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:07:40.698 passed 00:07:40.698 00:07:40.698 Run Summary: Type Total Ran Passed Failed Inactive 00:07:40.698 suites 1 1 n/a 0 0 00:07:40.698 tests 30 30 30 0 0 00:07:40.698 asserts 885 885 885 0 n/a 00:07:40.698 00:07:40.698 Elapsed time = 0.006 seconds 00:07:40.698 21:04:03 -- unit/unittest.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:07:40.698 00:07:40.698 00:07:40.698 CUnit - A unit testing framework for C - Version 2.1-3 00:07:40.698 http://cunit.sourceforge.net/ 00:07:40.698 00:07:40.698 00:07:40.698 Suite: nvmf 00:07:40.698 Test: test_get_rw_params ...passed 00:07:40.698 Test: test_lba_in_range ...passed 00:07:40.698 Test: test_get_dif_ctx ...passed 00:07:40.698 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:07:40.698 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-06-07 21:04:03.327011] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:07:40.698 [2024-06-07 21:04:03.327285] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:07:40.698 passed 00:07:40.698 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:07:40.698 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-06-07 21:04:03.327386] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:07:40.698 [2024-06-07 21:04:03.327435] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:07:40.698 [2024-06-07 21:04:03.327509] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:07:40.698 [2024-06-07 21:04:03.327608] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:07:40.698 [2024-06-07 21:04:03.327635] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:07:40.698 passed 00:07:40.698 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:07:40.698 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:07:40.698 00:07:40.698 [2024-06-07 21:04:03.327688] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:07:40.698 [2024-06-07 21:04:03.327730] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:07:40.698 Run Summary: Type Total Ran Passed Failed Inactive 00:07:40.698 suites 1 1 n/a 0 0 00:07:40.698 tests 9 9 9 0 0 00:07:40.698 asserts 157 157 157 0 n/a 00:07:40.698 00:07:40.698 Elapsed time = 0.001 seconds 00:07:40.698 21:04:03 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:07:40.698 00:07:40.698 00:07:40.698 CUnit - A unit testing framework for C - Version 2.1-3 00:07:40.698 http://cunit.sourceforge.net/ 00:07:40.698 00:07:40.698 00:07:40.698 Suite: nvmf 00:07:40.698 Test: test_discovery_log ...passed 00:07:40.698 Test: test_discovery_log_with_filters ...passed 00:07:40.698 00:07:40.698 Run Summary: Type Total Ran Passed Failed Inactive 00:07:40.698 suites 1 1 n/a 0 0 00:07:40.698 tests 2 2 2 0 0 00:07:40.698 asserts 238 238 238 0 n/a 00:07:40.698 00:07:40.698 Elapsed time = 0.003 seconds 00:07:40.958 21:04:03 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:07:40.958 00:07:40.958 00:07:40.958 CUnit - A unit testing framework for C - Version 2.1-3 00:07:40.958 http://cunit.sourceforge.net/ 00:07:40.958 00:07:40.958 00:07:40.958 Suite: nvmf 
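
Before the subsystem suite output continues, note that the ctrlr_bdev_ut failures above reduce to two arithmetic guards: an overflow-safe LBA range check against the namespace size (the "end of media" cases), and, for fused compare-and-write, the requirement that both halves name the same start LBA and block count ("Fused command start lba / num blocks mismatch"). A sketch of both, with illustrative names:

    #include <stdbool.h>
    #include <stdint.h>

    /* NLB in NVMe commands is zero-based, so the transfer covers
     * nlb + 1 blocks. Written to avoid overflow in slba + nblocks. */
    static bool
    lba_range_ok(uint64_t slba, uint16_t nlb, uint64_t ns_blocks)
    {
        uint64_t nblocks = (uint64_t)nlb + 1;

        return slba < ns_blocks && nblocks <= ns_blocks - slba;
    }

    /* Fused compare-and-write: the compare and the write must describe
     * exactly the same LBA range, or the pair is rejected. */
    static bool
    fused_range_match(uint64_t slba_cmp, uint16_t nlb_cmp,
                      uint64_t slba_wr, uint16_t nlb_wr)
    {
        return slba_cmp == slba_wr && nlb_cmp == nlb_wr;
    }
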
00:07:40.958 Test: nvmf_test_create_subsystem ...[2024-06-07 21:04:03.411196] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:07:40.958 [2024-06-07 21:04:03.411635] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:07:40.958 [2024-06-07 21:04:03.411745] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:07:40.958 [2024-06-07 21:04:03.411781] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:07:40.958 [2024-06-07 21:04:03.411807] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:07:40.958 [2024-06-07 21:04:03.411843] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:07:40.958 [2024-06-07 21:04:03.411954] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:07:40.958 [2024-06-07 21:04:03.412148] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
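
The nvmf_test_create_subsystem rejections above spell out the NQN grammar: a date-based NQN needs a user-specified part after the ':', domain labels must start with a letter and end with an alphanumeric character, the whole string tops out at 223 bytes, and the UUID form must carry a correctly shaped UUID in valid UTF-8. The full validator walks every label; as a sketch, two of the cheap checks restated standalone:

    #include <stdbool.h>
    #include <string.h>

    #define NQN_MAX_LEN 223   /* the "length 224 > max 223" case */

    static bool
    nqn_basic_ok(const char *nqn)
    {
        size_t len = strlen(nqn);

        if (len > NQN_MAX_LEN) {
            return false;
        }
        /* "nqn.2016-06.io.spdk:" fails: nothing follows the ':'. */
        const char *colon = strchr(nqn, ':');
        return colon != NULL && colon[1] != '\0';
    }
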
00:07:40.958 [2024-06-07 21:04:03.412253] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:07:40.958 passed 00:07:40.958 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-06-07 21:04:03.412296] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:40.958 [2024-06-07 21:04:03.412321] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:40.958 [2024-06-07 21:04:03.412505] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:07:40.958 [2024-06-07 21:04:03.412613] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:07:40.958 passed 00:07:40.958 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:07:40.958 Test: test_reservation_register ...[2024-06-07 21:04:03.412954] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:40.958 [2024-06-07 21:04:03.413123] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:07:40.958 passed 00:07:40.958 Test: test_reservation_register_with_ptpl ...passed 00:07:40.958 Test: test_reservation_acquire_preempt_1 ...[2024-06-07 21:04:03.414264] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:40.958 passed 00:07:40.958 Test: test_reservation_acquire_release_with_ptpl ...passed 00:07:40.958 Test: test_reservation_release ...[2024-06-07 21:04:03.416009] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:40.958 passed 00:07:40.958 Test: test_reservation_unregister_notification ...[2024-06-07 21:04:03.416238] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:40.958 passed 00:07:40.958 Test: test_reservation_release_notification ...[2024-06-07 21:04:03.416530] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:40.958 passed 00:07:40.958 Test: test_reservation_release_notification_write_exclusive ...[2024-06-07 21:04:03.416801] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:40.958 passed 00:07:40.958 Test: test_reservation_clear_notification ...[2024-06-07 21:04:03.417076] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:40.958 passed 00:07:40.958 Test: test_reservation_preempt_notification ...[2024-06-07 21:04:03.417332] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:40.958 passed 00:07:40.958 Test: test_spdk_nvmf_ns_event ...passed 00:07:40.958 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:07:40.959 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:07:40.959 Test: test_spdk_nvmf_subsystem_add_host ...[2024-06-07 21:04:03.418167] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:07:40.959 [2024-06-07 21:04:03.418270] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:07:40.959 passed 00:07:40.959 Test: test_nvmf_ns_reservation_report ...[2024-06-07 21:04:03.418411] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:07:40.959 passed 00:07:40.959 Test: test_nvmf_nqn_is_valid ...[2024-06-07 21:04:03.418503] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:07:40.959 [2024-06-07 21:04:03.418539] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:928b7509-7b03-46cc-bab8-f0d391e8708": uuid is not the correct length 00:07:40.959 passed 00:07:40.959 Test: test_nvmf_ns_reservation_restore ...passed 00:07:40.959 Test: test_nvmf_subsystem_state_change ...[2024-06-07 21:04:03.418568] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:07:40.959 [2024-06-07 21:04:03.418679] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:07:40.959 passed 00:07:40.959 Test: test_nvmf_reservation_custom_ops ...passed 00:07:40.959 00:07:40.959 Run Summary: Type Total Ran Passed Failed Inactive 00:07:40.959 suites 1 1 n/a 0 0 00:07:40.959 tests 22 22 22 0 0 00:07:40.959 asserts 407 407 407 0 n/a 00:07:40.959 00:07:40.959 Elapsed time = 0.009 seconds 00:07:40.959 21:04:03 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:07:40.959 00:07:40.959 00:07:40.959 CUnit - A unit testing framework for C - Version 2.1-3 00:07:40.959 http://cunit.sourceforge.net/ 00:07:40.959 00:07:40.959 00:07:40.959 Suite: nvmf 00:07:40.959 Test: test_nvmf_tcp_create ...[2024-06-07 21:04:03.485551] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:07:40.959 passed 00:07:40.959 Test: test_nvmf_tcp_destroy ...passed 00:07:40.959 Test: test_nvmf_tcp_poll_group_create ...passed 00:07:40.959 Test: test_nvmf_tcp_send_c2h_data ...passed 00:07:40.959 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:07:40.959 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:07:40.959 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:07:40.959 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-06-07 21:04:03.598130] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:40.959 [2024-06-07 21:04:03.598240] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc01f4e80 is same with the state(5) to be set 00:07:40.959 [2024-06-07 21:04:03.598347] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x7ffcc01f4e80 is same with the state(5) to be set 00:07:40.959 [2024-06-07 21:04:03.598388] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:40.959 [2024-06-07 21:04:03.598417] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc01f4e80 is same with the state(5) to be set 00:07:40.959 passed 00:07:40.959 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:07:40.959 Test: test_nvmf_tcp_icreq_handle ...[2024-06-07 21:04:03.598510] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:07:40.959 [2024-06-07 21:04:03.598601] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:40.959 [2024-06-07 21:04:03.598666] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc01f4e80 is same with the state(5) to be set 00:07:40.959 [2024-06-07 21:04:03.598710] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:07:40.959 [2024-06-07 21:04:03.598750] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc01f4e80 is same with the state(5) to be set 00:07:40.959 [2024-06-07 21:04:03.598775] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:40.959 [2024-06-07 21:04:03.598807] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc01f4e80 is same with the state(5) to be set 00:07:40.959 passed 00:07:40.959 Test: test_nvmf_tcp_check_xfer_type ...[2024-06-07 21:04:03.598852] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:07:40.959 [2024-06-07 21:04:03.598912] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc01f4e80 is same with the state(5) to be set 00:07:40.959 passed 00:07:40.959 Test: test_nvmf_tcp_invalid_sgl ...[2024-06-07 21:04:03.598998] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2484:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:07:40.959 [2024-06-07 21:04:03.599050] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:40.959 [2024-06-07 21:04:03.599084] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc01f4e80 is same with the state(5) to be set 00:07:40.959 passed 00:07:40.959 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-06-07 21:04:03.599137] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffcc01f5be0 00:07:40.959 [2024-06-07 21:04:03.599223] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:40.959 [2024-06-07 21:04:03.599285] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc01f5340 is same with the state(5) to be set 00:07:40.959 [2024-06-07 21:04:03.599327] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffcc01f5340 00:07:40.959 [2024-06-07 21:04:03.599354] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:40.959 [2024-06-07 21:04:03.599384] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc01f5340 is same with the state(5) to be set 00:07:40.959 [2024-06-07 21:04:03.599415] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:07:40.959 [2024-06-07 21:04:03.599449] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:40.959 [2024-06-07 21:04:03.599489] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc01f5340 is same with the state(5) to be set 00:07:40.959 [2024-06-07 21:04:03.599535] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:07:40.959 [2024-06-07 21:04:03.599566] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:40.959 [2024-06-07 21:04:03.599597] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc01f5340 is same with the state(5) to be set 00:07:40.959 [2024-06-07 21:04:03.599631] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:40.959 [2024-06-07 21:04:03.599662] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc01f5340 is same with the state(5) to be set 00:07:40.959 [2024-06-07 21:04:03.599730] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:40.959 [2024-06-07 21:04:03.599761] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc01f5340 is same with the state(5) to be set 00:07:40.959 [2024-06-07 21:04:03.599807] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:40.959 [2024-06-07 21:04:03.599832] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc01f5340 is same with the state(5) to be set 00:07:40.959 [2024-06-07 21:04:03.599868] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:40.959 [2024-06-07 21:04:03.599892] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc01f5340 is same with the state(5) to be set 00:07:40.959 [2024-06-07 21:04:03.599945] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:40.959 [2024-06-07 21:04:03.599976] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc01f5340 is same with the state(5) to be set 00:07:40.959 passed 00:07:40.959 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-06-07 
21:04:03.600014] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:40.959 [2024-06-07 21:04:03.600037] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc01f5340 is same with the state(5) to be set 00:07:40.959 passed 00:07:40.959 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-06-07 21:04:03.619407] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:07:40.959 passed 00:07:40.959 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-06-07 21:04:03.619466] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:07:40.959 [2024-06-07 21:04:03.619699] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:07:40.959 [2024-06-07 21:04:03.619738] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:07:40.959 passed 00:07:40.959 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-06-07 21:04:03.619881] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:07:40.959 [2024-06-07 21:04:03.619909] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:07:40.959 passed 00:07:40.959 00:07:40.959 Run Summary: Type Total Ran Passed Failed Inactive 00:07:40.959 suites 1 1 n/a 0 0 00:07:40.959 tests 17 17 17 0 0 00:07:40.959 asserts 222 222 222 0 n/a 00:07:40.959 00:07:40.959 Elapsed time = 0.160 seconds 00:07:41.219 21:04:03 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:07:41.219 00:07:41.219 00:07:41.219 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.219 http://cunit.sourceforge.net/ 00:07:41.219 00:07:41.219 00:07:41.219 Suite: nvmf 00:07:41.219 Test: test_nvmf_tgt_create_poll_group ...passed 00:07:41.219 00:07:41.219 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.219 suites 1 1 n/a 0 0 00:07:41.219 tests 1 1 1 0 0 00:07:41.219 asserts 17 17 17 0 n/a 00:07:41.219 00:07:41.219 Elapsed time = 0.024 seconds 00:07:41.219 00:07:41.219 real 0m0.509s 00:07:41.219 user 0m0.252s 00:07:41.219 sys 0m0.259s 00:07:41.219 21:04:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.219 21:04:03 -- common/autotest_common.sh@10 -- # set +x 00:07:41.219 ************************************ 00:07:41.219 END TEST unittest_nvmf 00:07:41.219 ************************************ 00:07:41.219 21:04:03 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:41.219 21:04:03 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:41.219 21:04:03 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:07:41.219 21:04:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:41.219 21:04:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.219 21:04:03 -- common/autotest_common.sh@10 -- # set +x 00:07:41.219 ************************************ 00:07:41.219 START TEST 
unittest_nvmf_rdma 00:07:41.219 ************************************ 00:07:41.219 21:04:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:07:41.219 00:07:41.219 00:07:41.219 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.219 http://cunit.sourceforge.net/ 00:07:41.219 00:07:41.219 00:07:41.219 Suite: nvmf 00:07:41.219 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-06-07 21:04:03.850307] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1916:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:07:41.219 [2024-06-07 21:04:03.850786] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:07:41.219 [2024-06-07 21:04:03.850951] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:07:41.219 passed 00:07:41.219 Test: test_spdk_nvmf_rdma_request_process ...passed 00:07:41.219 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:07:41.219 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:07:41.219 Test: test_nvmf_rdma_opts_init ...passed 00:07:41.219 Test: test_nvmf_rdma_request_free_data ...passed 00:07:41.219 Test: test_nvmf_rdma_update_ibv_state ...[2024-06-07 21:04:03.853670] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 00:07:41.219 [2024-06-07 21:04:03.853825] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:07:41.219 passed 00:07:41.219 Test: test_nvmf_rdma_resources_create ...passed 00:07:41.219 Test: test_nvmf_rdma_qpair_compare ...passed 00:07:41.219 Test: test_nvmf_rdma_resize_cq ...[2024-06-07 21:04:03.855857] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. 
Current capacity 20, required 0 00:07:41.219 Using CQ of insufficient size may lead to CQ overrun 00:07:41.219 [2024-06-07 21:04:03.856094] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:07:41.219 [2024-06-07 21:04:03.856271] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:41.219 passed 00:07:41.219 00:07:41.219 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.219 suites 1 1 n/a 0 0 00:07:41.219 tests 10 10 10 0 0 00:07:41.219 asserts 584 584 584 0 n/a 00:07:41.219 00:07:41.219 Elapsed time = 0.004 seconds 00:07:41.219 00:07:41.219 real 0m0.045s 00:07:41.219 user 0m0.021s 00:07:41.219 sys 0m0.021s 00:07:41.219 21:04:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.219 21:04:03 -- common/autotest_common.sh@10 -- # set +x 00:07:41.219 ************************************ 00:07:41.219 END TEST unittest_nvmf_rdma 00:07:41.219 ************************************ 00:07:41.479 21:04:03 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:41.479 21:04:03 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:07:41.479 21:04:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:41.479 21:04:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.479 21:04:03 -- common/autotest_common.sh@10 -- # set +x 00:07:41.479 ************************************ 00:07:41.479 START TEST unittest_scsi 00:07:41.479 ************************************ 00:07:41.479 21:04:03 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:07:41.479 21:04:03 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:07:41.479 00:07:41.479 00:07:41.479 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.479 http://cunit.sourceforge.net/ 00:07:41.479 00:07:41.479 00:07:41.479 Suite: dev_suite 00:07:41.479 Test: dev_destruct_null_dev ...passed 00:07:41.479 Test: dev_destruct_zero_luns ...passed 00:07:41.479 Test: dev_destruct_null_lun ...passed 00:07:41.479 Test: dev_destruct_success ...passed 00:07:41.479 Test: dev_construct_num_luns_zero ...[2024-06-07 21:04:03.950822] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:07:41.479 passed 00:07:41.479 Test: dev_construct_no_lun_zero ...[2024-06-07 21:04:03.951263] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:07:41.479 passed 00:07:41.479 Test: dev_construct_null_lun ...passed 00:07:41.479 Test: dev_construct_name_too_long ...[2024-06-07 21:04:03.951316] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:07:41.479 [2024-06-07 21:04:03.951355] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:07:41.479 passed 00:07:41.479 Test: dev_construct_success ...passed 00:07:41.479 Test: dev_construct_success_lun_zero_not_first ...passed 00:07:41.479 Test: 
dev_queue_mgmt_task_success ...passed 00:07:41.479 Test: dev_queue_task_success ...passed 00:07:41.479 Test: dev_stop_success ...passed 00:07:41.479 Test: dev_add_port_max_ports ...[2024-06-07 21:04:03.951688] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:07:41.479 passed 00:07:41.479 Test: dev_add_port_construct_failure1 ...[2024-06-07 21:04:03.951780] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:07:41.479 passed 00:07:41.479 Test: dev_add_port_construct_failure2 ...passed 00:07:41.479 Test: dev_add_port_success1 ...passed 00:07:41.479 Test: dev_add_port_success2 ...passed 00:07:41.479 Test: dev_add_port_success3 ...passed 00:07:41.479 Test: dev_find_port_by_id_num_ports_zero ...passed 00:07:41.479 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:07:41.479 Test: dev_find_port_by_id_success ...passed 00:07:41.479 Test: dev_add_lun_bdev_not_found ...passed 00:07:41.479 Test: dev_add_lun_no_free_lun_id ...[2024-06-07 21:04:03.951865] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:07:41.479 [2024-06-07 21:04:03.952333] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:07:41.479 passed 00:07:41.479 Test: dev_add_lun_success1 ...passed 00:07:41.479 Test: dev_add_lun_success2 ...passed 00:07:41.479 Test: dev_check_pending_tasks ...passed 00:07:41.479 Test: dev_iterate_luns ...passed 00:07:41.479 Test: dev_find_free_lun ...passed 00:07:41.479 00:07:41.479 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.479 suites 1 1 n/a 0 0 00:07:41.479 tests 29 29 29 0 0 00:07:41.479 asserts 97 97 97 0 n/a 00:07:41.479 00:07:41.479 Elapsed time = 0.002 seconds 00:07:41.479 21:04:03 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:07:41.479 00:07:41.479 00:07:41.479 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.479 http://cunit.sourceforge.net/ 00:07:41.479 00:07:41.479 00:07:41.479 Suite: lun_suite 00:07:41.479 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-06-07 21:04:03.990317] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:07:41.479 passed 00:07:41.479 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-06-07 21:04:03.990970] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:07:41.479 passed 00:07:41.479 Test: lun_task_mgmt_execute_lun_reset ...passed 00:07:41.479 Test: lun_task_mgmt_execute_target_reset ...passed 00:07:41.479 Test: lun_task_mgmt_execute_invalid_case ...[2024-06-07 21:04:03.991753] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:07:41.479 passed 00:07:41.479 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:07:41.479 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:07:41.479 Test: lun_append_task_null_lun_not_supported ...passed 00:07:41.479 Test: lun_execute_scsi_task_pending ...passed 00:07:41.479 Test: lun_execute_scsi_task_complete ...passed 00:07:41.479 Test: lun_execute_scsi_task_resize ...passed 00:07:41.479 Test: lun_destruct_success ...passed 00:07:41.479 Test: lun_construct_null_ctx ...[2024-06-07 21:04:03.993215] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: 
bdev_name must be non-NULL 00:07:41.479 passed 00:07:41.479 Test: lun_construct_success ...passed 00:07:41.479 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:07:41.479 Test: lun_reset_task_suspend_scsi_task ...passed 00:07:41.479 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:07:41.479 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:07:41.479 00:07:41.479 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.479 suites 1 1 n/a 0 0 00:07:41.479 tests 18 18 18 0 0 00:07:41.479 asserts 153 153 153 0 n/a 00:07:41.479 00:07:41.479 Elapsed time = 0.002 seconds 00:07:41.479 21:04:04 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:07:41.479 00:07:41.479 00:07:41.479 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.479 http://cunit.sourceforge.net/ 00:07:41.480 00:07:41.480 00:07:41.480 Suite: scsi_suite 00:07:41.480 Test: scsi_init ...passed 00:07:41.480 00:07:41.480 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.480 suites 1 1 n/a 0 0 00:07:41.480 tests 1 1 1 0 0 00:07:41.480 asserts 1 1 1 0 n/a 00:07:41.480 00:07:41.480 Elapsed time = 0.000 seconds 00:07:41.480 21:04:04 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:07:41.480 00:07:41.480 00:07:41.480 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.480 http://cunit.sourceforge.net/ 00:07:41.480 00:07:41.480 00:07:41.480 Suite: translation_suite 00:07:41.480 Test: mode_select_6_test ...passed 00:07:41.480 Test: mode_select_6_test2 ...passed 00:07:41.480 Test: mode_sense_6_test ...passed 00:07:41.480 Test: mode_sense_10_test ...passed 00:07:41.480 Test: inquiry_evpd_test ...passed 00:07:41.480 Test: inquiry_standard_test ...passed 00:07:41.480 Test: inquiry_overflow_test ...passed 00:07:41.480 Test: task_complete_test ...passed 00:07:41.480 Test: lba_range_test ...passed 00:07:41.480 Test: xfer_len_test ...[2024-06-07 21:04:04.057449] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:07:41.480 passed 00:07:41.480 Test: xfer_test ...passed 00:07:41.480 Test: scsi_name_padding_test ...passed 00:07:41.480 Test: get_dif_ctx_test ...passed 00:07:41.480 Test: unmap_split_test ...passed 00:07:41.480 00:07:41.480 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.480 suites 1 1 n/a 0 0 00:07:41.480 tests 14 14 14 0 0 00:07:41.480 asserts 1200 1200 1200 0 n/a 00:07:41.480 00:07:41.480 Elapsed time = 0.004 seconds 00:07:41.480 21:04:04 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:07:41.480 00:07:41.480 00:07:41.480 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.480 http://cunit.sourceforge.net/ 00:07:41.480 00:07:41.480 00:07:41.480 Suite: reservation_suite 00:07:41.480 Test: test_reservation_register ...[2024-06-07 21:04:04.089055] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:41.480 passed 00:07:41.480 Test: test_reservation_reserve ...[2024-06-07 21:04:04.089409] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:41.480 [2024-06-07 21:04:04.089478] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:07:41.480 passed 00:07:41.480 Test: 
test_reservation_preempt_non_all_regs ...[2024-06-07 21:04:04.089586] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:07:41.480 [2024-06-07 21:04:04.089646] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:41.480 passed 00:07:41.480 Test: test_reservation_preempt_all_regs ...[2024-06-07 21:04:04.089714] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:07:41.480 [2024-06-07 21:04:04.089835] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:41.480 passed 00:07:41.480 Test: test_reservation_cmds_conflict ...[2024-06-07 21:04:04.089971] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:41.480 [2024-06-07 21:04:04.090034] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:07:41.480 [2024-06-07 21:04:04.090071] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:07:41.480 [2024-06-07 21:04:04.090093] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:07:41.480 [2024-06-07 21:04:04.090121] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:07:41.480 [2024-06-07 21:04:04.090145] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:07:41.480 passed 00:07:41.480 Test: test_scsi2_reserve_release ...passed 00:07:41.480 Test: test_pr_with_scsi2_reserve_release ...[2024-06-07 21:04:04.090233] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:41.480 passed 00:07:41.480 00:07:41.480 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.480 suites 1 1 n/a 0 0 00:07:41.480 tests 7 7 7 0 0 00:07:41.480 asserts 257 257 257 0 n/a 00:07:41.480 00:07:41.480 Elapsed time = 0.001 seconds 00:07:41.480 00:07:41.480 real 0m0.169s 00:07:41.480 user 0m0.077s 00:07:41.480 sys 0m0.091s 00:07:41.480 21:04:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.480 ************************************ 00:07:41.480 END TEST unittest_scsi 00:07:41.480 ************************************ 00:07:41.480 21:04:04 -- common/autotest_common.sh@10 -- # set +x 00:07:41.480 21:04:04 -- unit/unittest.sh@276 -- # uname -s 00:07:41.480 21:04:04 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:07:41.480 21:04:04 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:07:41.480 21:04:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:41.480 21:04:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.480 21:04:04 -- common/autotest_common.sh@10 -- # set +x 00:07:41.740 ************************************ 00:07:41.740 START TEST unittest_sock 00:07:41.740 ************************************ 00:07:41.740 21:04:04 -- common/autotest_common.sh@1104 -- # unittest_sock 00:07:41.740 21:04:04 -- unit/unittest.sh@123 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:07:41.740 00:07:41.740 00:07:41.740 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.740 http://cunit.sourceforge.net/ 00:07:41.740 00:07:41.740 00:07:41.740 Suite: sock 00:07:41.740 Test: posix_sock ...passed 00:07:41.740 Test: ut_sock ...passed 00:07:41.740 Test: posix_sock_group ...passed 00:07:41.740 Test: ut_sock_group ...passed 00:07:41.740 Test: posix_sock_group_fairness ...passed 00:07:41.740 Test: _posix_sock_close ...passed 00:07:41.740 Test: sock_get_default_opts ...passed 00:07:41.740 Test: ut_sock_impl_get_set_opts ...passed 00:07:41.740 Test: posix_sock_impl_get_set_opts ...passed 00:07:41.740 Test: ut_sock_map ...passed 00:07:41.740 Test: override_impl_opts ...passed 00:07:41.740 Test: ut_sock_group_get_ctx ...passed 00:07:41.740 00:07:41.740 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.740 suites 1 1 n/a 0 0 00:07:41.740 tests 12 12 12 0 0 00:07:41.740 asserts 349 349 349 0 n/a 00:07:41.740 00:07:41.740 Elapsed time = 0.007 seconds 00:07:41.740 21:04:04 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:07:41.740 00:07:41.740 00:07:41.740 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.740 http://cunit.sourceforge.net/ 00:07:41.740 00:07:41.740 00:07:41.740 Suite: posix 00:07:41.740 Test: flush ...passed 00:07:41.740 00:07:41.740 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.740 suites 1 1 n/a 0 0 00:07:41.740 tests 1 1 1 0 0 00:07:41.740 asserts 28 28 28 0 n/a 00:07:41.740 00:07:41.740 Elapsed time = 0.000 seconds 00:07:41.740 21:04:04 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:41.740 00:07:41.740 real 0m0.102s 00:07:41.740 user 0m0.034s 00:07:41.740 sys 0m0.042s 00:07:41.740 21:04:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.740 21:04:04 -- common/autotest_common.sh@10 -- # set +x 00:07:41.740 ************************************ 00:07:41.740 END TEST unittest_sock 00:07:41.740 ************************************ 00:07:41.740 21:04:04 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:07:41.740 21:04:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:41.740 21:04:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.740 21:04:04 -- common/autotest_common.sh@10 -- # set +x 00:07:41.740 ************************************ 00:07:41.740 START TEST unittest_thread 00:07:41.740 ************************************ 00:07:41.740 21:04:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:07:41.740 00:07:41.740 00:07:41.740 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.740 http://cunit.sourceforge.net/ 00:07:41.740 00:07:41.740 00:07:41.740 Suite: io_channel 00:07:41.740 Test: thread_alloc ...passed 00:07:41.740 Test: thread_send_msg ...passed 00:07:41.740 Test: thread_poller ...passed 00:07:41.740 Test: poller_pause ...passed 00:07:41.740 Test: thread_for_each ...passed 00:07:41.740 Test: for_each_channel_remove ...passed 00:07:41.740 Test: for_each_channel_unreg ...[2024-06-07 21:04:04.352820] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7ffd011bade0 already registered (old:0x613000000200 new:0x6130000003c0) 00:07:41.740 passed 00:07:41.740 Test: thread_name ...passed 
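
The for_each_channel_unreg failure above ("io_device 0x7ffd011bade0 already registered") reflects that the io_device registry is keyed by the device pointer itself: registering the same address twice is an error until the first registration is torn down. A toy registry making the same duplicate check explicit (not SPDK's implementation, just the invariant):

    #include <stdbool.h>
    #include <stddef.h>

    struct io_device_entry {
        void                   *io_device;   /* the pointer is the key */
        struct io_device_entry *next;
    };

    static struct io_device_entry *g_io_devices;

    /* Returns false on a duplicate, matching the unit test's error. */
    static bool
    io_device_register(struct io_device_entry *entry, void *io_device)
    {
        for (struct io_device_entry *it = g_io_devices; it != NULL; it = it->next) {
            if (it->io_device == io_device) {
                return false;   /* "already registered" */
            }
        }
        entry->io_device = io_device;
        entry->next = g_io_devices;
        g_io_devices = entry;
        return true;
    }
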
00:07:41.740 Test: channel ...[2024-06-07 21:04:04.357029] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x55e56c30a0e0 00:07:41.740 passed 00:07:41.740 Test: channel_destroy_races ...passed 00:07:41.740 Test: thread_exit_test ...[2024-06-07 21:04:04.362181] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:07:41.740 passed 00:07:41.740 Test: thread_update_stats_test ...passed 00:07:41.740 Test: nested_channel ...passed 00:07:41.740 Test: device_unregister_and_thread_exit_race ...passed 00:07:41.740 Test: cache_closest_timed_poller ...passed 00:07:41.740 Test: multi_timed_pollers_have_same_expiration ...passed 00:07:41.740 Test: io_device_lookup ...passed 00:07:41.740 Test: spdk_spin ...[2024-06-07 21:04:04.372889] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:41.740 [2024-06-07 21:04:04.372937] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd011badd0 00:07:41.740 [2024-06-07 21:04:04.373025] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:41.740 [2024-06-07 21:04:04.374658] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:41.740 [2024-06-07 21:04:04.374721] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd011badd0 00:07:41.740 [2024-06-07 21:04:04.374746] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:41.740 [2024-06-07 21:04:04.374775] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd011badd0 00:07:41.740 [2024-06-07 21:04:04.374798] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:41.740 [2024-06-07 21:04:04.374842] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd011badd0 00:07:41.740 [2024-06-07 21:04:04.374866] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:07:41.740 [2024-06-07 21:04:04.374906] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd011badd0 00:07:41.740 passed 00:07:41.740 Test: for_each_channel_and_thread_exit_race ...passed 00:07:41.740 Test: for_each_thread_and_thread_exit_race ...passed 00:07:41.740 00:07:41.740 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.740 suites 1 1 n/a 0 0 00:07:41.740 tests 20 20 20 0 0 00:07:41.740 asserts 409 409 409 0 n/a 00:07:41.740 00:07:41.740 Elapsed time = 0.050 seconds 00:07:41.740 00:07:41.741 real 0m0.090s 00:07:41.741 user 0m0.053s 00:07:41.741 sys 0m0.037s 00:07:41.741 21:04:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.741 21:04:04 -- common/autotest_common.sh@10 -- # set +x 00:07:41.741 ************************************ 00:07:41.741 END TEST unittest_thread 00:07:41.741 
************************************ 00:07:41.999 21:04:04 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:41.999 21:04:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:41.999 21:04:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.999 21:04:04 -- common/autotest_common.sh@10 -- # set +x 00:07:41.999 ************************************ 00:07:41.999 START TEST unittest_iobuf 00:07:41.999 ************************************ 00:07:41.999 21:04:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:41.999 00:07:41.999 00:07:41.999 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.000 http://cunit.sourceforge.net/ 00:07:42.000 00:07:42.000 00:07:42.000 Suite: io_channel 00:07:42.000 Test: iobuf ...passed 00:07:42.000 Test: iobuf_cache ...[2024-06-07 21:04:04.480764] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:42.000 [2024-06-07 21:04:04.481278] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:42.000 [2024-06-07 21:04:04.481532] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:07:42.000 [2024-06-07 21:04:04.481683] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:42.000 [2024-06-07 21:04:04.481795] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:42.000 [2024-06-07 21:04:04.481930] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
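[annotation] The *ERROR* lines above from spdk_iobuf_channel_init are the negative path that iobuf_cache provokes on purpose (the "passed" that follows confirms it): a channel asked for a per-channel buffer cache larger than the global pools, which the test pins at small_pool_count/large_pool_count = 4. Outside a unit test the cure is the one the message names; roughly, as a sketch (the field names come straight from the log, the counts are arbitrary, and the spdk_iobuf_get_opts/spdk_iobuf_set_opts signatures should be checked against your SPDK headers, since they have shifted between releases):

    #include "spdk/thread.h"

    /* Enlarge the global iobuf pools before any iobuf channel is created,
     * so every channel's requested cache can actually be populated. */
    static int configure_iobuf_pools(void)
    {
        struct spdk_iobuf_opts opts = {};

        spdk_iobuf_get_opts(&opts);
        opts.small_pool_count = 8192;  /* illustrative; see scripts/calc-iobuf.py */
        opts.large_pool_count = 1024;  /* illustrative */

        return spdk_iobuf_set_opts(&opts);
    }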
00:07:42.000 passed 00:07:42.000 00:07:42.000 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.000 suites 1 1 n/a 0 0 00:07:42.000 tests 2 2 2 0 0 00:07:42.000 asserts 107 107 107 0 n/a 00:07:42.000 00:07:42.000 Elapsed time = 0.006 seconds 00:07:42.000 00:07:42.000 real 0m0.041s 00:07:42.000 user 0m0.024s 00:07:42.000 sys 0m0.016s 00:07:42.000 21:04:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.000 ************************************ 00:07:42.000 END TEST unittest_iobuf 00:07:42.000 ************************************ 00:07:42.000 21:04:04 -- common/autotest_common.sh@10 -- # set +x 00:07:42.000 21:04:04 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:07:42.000 21:04:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:42.000 21:04:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.000 21:04:04 -- common/autotest_common.sh@10 -- # set +x 00:07:42.000 ************************************ 00:07:42.000 START TEST unittest_util 00:07:42.000 ************************************ 00:07:42.000 21:04:04 -- common/autotest_common.sh@1104 -- # unittest_util 00:07:42.000 21:04:04 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:07:42.000 00:07:42.000 00:07:42.000 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.000 http://cunit.sourceforge.net/ 00:07:42.000 00:07:42.000 00:07:42.000 Suite: base64 00:07:42.000 Test: test_base64_get_encoded_strlen ...passed 00:07:42.000 Test: test_base64_get_decoded_len ...passed 00:07:42.000 Test: test_base64_encode ...passed 00:07:42.000 Test: test_base64_decode ...passed 00:07:42.000 Test: test_base64_urlsafe_encode ...passed 00:07:42.000 Test: test_base64_urlsafe_decode ...passed 00:07:42.000 00:07:42.000 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.000 suites 1 1 n/a 0 0 00:07:42.000 tests 6 6 6 0 0 00:07:42.000 asserts 112 112 112 0 n/a 00:07:42.000 00:07:42.000 Elapsed time = 0.000 seconds 00:07:42.000 21:04:04 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:07:42.000 00:07:42.000 00:07:42.000 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.000 http://cunit.sourceforge.net/ 00:07:42.000 00:07:42.000 00:07:42.000 Suite: bit_array 00:07:42.000 Test: test_1bit ...passed 00:07:42.000 Test: test_64bit ...passed 00:07:42.000 Test: test_find ...passed 00:07:42.000 Test: test_resize ...passed 00:07:42.000 Test: test_errors ...passed 00:07:42.000 Test: test_count ...passed 00:07:42.000 Test: test_mask_store_load ...passed 00:07:42.000 Test: test_mask_clear ...passed 00:07:42.000 00:07:42.000 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.000 suites 1 1 n/a 0 0 00:07:42.000 tests 8 8 8 0 0 00:07:42.000 asserts 5075 5075 5075 0 n/a 00:07:42.000 00:07:42.000 Elapsed time = 0.002 seconds 00:07:42.000 21:04:04 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:07:42.000 00:07:42.000 00:07:42.000 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.000 http://cunit.sourceforge.net/ 00:07:42.000 00:07:42.000 00:07:42.000 Suite: cpuset 00:07:42.000 Test: test_cpuset ...passed 00:07:42.000 Test: test_cpuset_parse ...[2024-06-07 21:04:04.634097] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:07:42.000 [2024-06-07 21:04:04.634513] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:07:42.000 [2024-06-07 21:04:04.634615] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:07:42.000 [2024-06-07 21:04:04.634699] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:07:42.000 [2024-06-07 21:04:04.634734] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:07:42.000 [2024-06-07 21:04:04.634769] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:07:42.000 passed 00:07:42.000 Test: test_cpuset_fmt ...[2024-06-07 21:04:04.634796] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:07:42.000 [2024-06-07 21:04:04.634845] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:07:42.000 passed 00:07:42.000 00:07:42.000 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.000 suites 1 1 n/a 0 0 00:07:42.000 tests 3 3 3 0 0 00:07:42.000 asserts 65 65 65 0 n/a 00:07:42.000 00:07:42.000 Elapsed time = 0.002 seconds 00:07:42.000 21:04:04 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:07:42.000 00:07:42.000 00:07:42.000 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.000 http://cunit.sourceforge.net/ 00:07:42.000 00:07:42.000 00:07:42.000 Suite: crc16 00:07:42.000 Test: test_crc16_t10dif ...passed 00:07:42.000 Test: test_crc16_t10dif_seed ...passed 00:07:42.000 Test: test_crc16_t10dif_copy ...passed 00:07:42.000 00:07:42.000 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.000 suites 1 1 n/a 0 0 00:07:42.000 tests 3 3 3 0 0 00:07:42.000 asserts 5 5 5 0 n/a 00:07:42.000 00:07:42.000 Elapsed time = 0.000 seconds 00:07:42.260 21:04:04 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:07:42.260 00:07:42.260 00:07:42.260 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.260 http://cunit.sourceforge.net/ 00:07:42.260 00:07:42.260 00:07:42.260 Suite: crc32_ieee 00:07:42.260 Test: test_crc32_ieee ...passed 00:07:42.260 00:07:42.260 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.260 suites 1 1 n/a 0 0 00:07:42.260 tests 1 1 1 0 0 00:07:42.260 asserts 1 1 1 0 n/a 00:07:42.260 00:07:42.260 Elapsed time = 0.000 seconds 00:07:42.260 21:04:04 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:07:42.260 00:07:42.260 00:07:42.260 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.260 http://cunit.sourceforge.net/ 00:07:42.260 00:07:42.260 00:07:42.260 Suite: crc32c 00:07:42.260 Test: test_crc32c ...passed 00:07:42.260 Test: test_crc32c_nvme ...passed 00:07:42.260 00:07:42.260 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.260 suites 1 1 n/a 0 0 00:07:42.260 tests 2 2 2 0 0 00:07:42.260 asserts 16 16 16 0 n/a 00:07:42.260 00:07:42.260 Elapsed time = 0.000 seconds 00:07:42.260 21:04:04 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:07:42.260 00:07:42.260 00:07:42.260 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.260 http://cunit.sourceforge.net/ 00:07:42.260 00:07:42.260 00:07:42.260 Suite: crc64 00:07:42.260 Test: test_crc64_nvme 
...passed 00:07:42.260 00:07:42.260 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.260 suites 1 1 n/a 0 0 00:07:42.260 tests 1 1 1 0 0 00:07:42.260 asserts 4 4 4 0 n/a 00:07:42.260 00:07:42.260 Elapsed time = 0.000 seconds 00:07:42.260 21:04:04 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:07:42.260 00:07:42.260 00:07:42.260 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.260 http://cunit.sourceforge.net/ 00:07:42.260 00:07:42.260 00:07:42.260 Suite: string 00:07:42.260 Test: test_parse_ip_addr ...passed 00:07:42.260 Test: test_str_chomp ...passed 00:07:42.260 Test: test_parse_capacity ...passed 00:07:42.260 Test: test_sprintf_append_realloc ...passed 00:07:42.260 Test: test_strtol ...passed 00:07:42.260 Test: test_strtoll ...passed 00:07:42.260 Test: test_strarray ...passed 00:07:42.260 Test: test_strcpy_replace ...passed 00:07:42.260 00:07:42.260 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.260 suites 1 1 n/a 0 0 00:07:42.260 tests 8 8 8 0 0 00:07:42.260 asserts 161 161 161 0 n/a 00:07:42.260 00:07:42.261 Elapsed time = 0.001 seconds 00:07:42.261 21:04:04 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:07:42.261 00:07:42.261 00:07:42.261 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.261 http://cunit.sourceforge.net/ 00:07:42.261 00:07:42.261 00:07:42.261 Suite: dif 00:07:42.261 Test: dif_generate_and_verify_test ...[2024-06-07 21:04:04.825005] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:42.261 [2024-06-07 21:04:04.825547] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:42.261 [2024-06-07 21:04:04.825833] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:42.261 [2024-06-07 21:04:04.826108] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:42.261 [2024-06-07 21:04:04.826380] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:42.261 [2024-06-07 21:04:04.826675] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:42.261 passed 00:07:42.261 Test: dif_disable_check_test ...[2024-06-07 21:04:04.827682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:42.261 [2024-06-07 21:04:04.828023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:42.261 [2024-06-07 21:04:04.828305] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:42.261 passed 00:07:42.261 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-06-07 21:04:04.829373] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:07:42.261 [2024-06-07 21:04:04.829685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:07:42.261 [2024-06-07 
21:04:04.830012] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:07:42.261 [2024-06-07 21:04:04.830373] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:07:42.261 [2024-06-07 21:04:04.830704] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:42.261 [2024-06-07 21:04:04.831006] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:42.261 [2024-06-07 21:04:04.831315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:42.261 [2024-06-07 21:04:04.831612] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:42.261 [2024-06-07 21:04:04.831943] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:42.261 [2024-06-07 21:04:04.832267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:42.261 [2024-06-07 21:04:04.832586] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:42.261 passed 00:07:42.261 Test: dif_apptag_mask_test ...[2024-06-07 21:04:04.832928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:42.261 [2024-06-07 21:04:04.833232] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:42.261 passed 00:07:42.261 Test: dif_sec_512_md_0_error_test ...[2024-06-07 21:04:04.833445] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:42.261 passed 00:07:42.261 Test: dif_sec_4096_md_0_error_test ...passed 00:07:42.261 Test: dif_sec_4100_md_128_error_test ...[2024-06-07 21:04:04.833480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:42.261 [2024-06-07 21:04:04.833515] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
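[annotation] Two things are being exercised in this stretch: dif_apptag_mask_test (the "Expected=1256, Actual=1234" pairs) checks that only unmasked Application Tag bits take part in the comparison, and the spdk_dif_ctx_init *ERROR* lines above and just below are pure argument validation (metadata smaller than the 8-byte DIF, zero or non-4kB-multiple block size). The masking rule, as a pure-C sketch of the behavior rather than SPDK's exact code (function name is mine):

    #include <stdbool.h>
    #include <stdint.h>

    /* Only bits selected by apptag_mask are compared, so a stored tag of
     * 0x1234 can agree or disagree with an expected 0x1256 depending on
     * which mask the DIF context was initialized with. */
    static bool app_tag_matches(uint16_t stored, uint16_t expected, uint16_t apptag_mask)
    {
        return (stored & apptag_mask) == (expected & apptag_mask);
    }

With apptag_mask = 0xff00 the two tags above agree (both reduce to 0x1200); with 0xffff they differ, which is the mismatch the log prints.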
00:07:42.261 [2024-06-07 21:04:04.833562] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:07:42.261 [2024-06-07 21:04:04.833593] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:07:42.261 passed 00:07:42.261 Test: dif_guard_seed_test ...passed 00:07:42.261 Test: dif_guard_value_test ...passed 00:07:42.261 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:07:42.261 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:07:42.261 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:42.261 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:42.261 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:42.261 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:07:42.261 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:42.261 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:42.261 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:07:42.261 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:42.261 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:07:42.261 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:07:42.261 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:42.261 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:42.261 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:42.261 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:42.261 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:42.261 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:42.261 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-06-07 21:04:04.878059] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd5c, Actual=fd4c 00:07:42.261 [2024-06-07 21:04:04.880529] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fe31, Actual=fe21 00:07:42.261 [2024-06-07 21:04:04.882973] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=98 00:07:42.261 [2024-06-07 21:04:04.885410] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=98 00:07:42.261 [2024-06-07 21:04:04.887865] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=10005b 00:07:42.261 [2024-06-07 21:04:04.890306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=10005b 00:07:42.261 [2024-06-07 21:04:04.892726] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd4c, Actual=fb79 00:07:42.261 [2024-06-07 21:04:04.894244] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fe21, Actual=dce 00:07:42.261 [2024-06-07 21:04:04.895763] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1aa753ed, 
Actual=1ab753ed 00:07:42.261 [2024-06-07 21:04:04.898202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=38474660, Actual=38574660 00:07:42.261 [2024-06-07 21:04:04.900642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=98 00:07:42.261 [2024-06-07 21:04:04.903084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=98 00:07:42.261 [2024-06-07 21:04:04.905522] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=1000000000005b 00:07:42.261 [2024-06-07 21:04:04.907925] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=1000000000005b 00:07:42.261 [2024-06-07 21:04:04.910367] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab753ed, Actual=4ef99cbe 00:07:42.261 [2024-06-07 21:04:04.911893] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=38574660, Actual=d0c87eb4 00:07:42.261 [2024-06-07 21:04:04.913446] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a566a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:42.261 [2024-06-07 21:04:04.915861] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=88110a2d4837a266, Actual=88010a2d4837a266 00:07:42.261 [2024-06-07 21:04:04.918288] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=98 00:07:42.261 [2024-06-07 21:04:04.920719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=98 00:07:42.261 [2024-06-07 21:04:04.923136] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=10005b 00:07:42.261 [2024-06-07 21:04:04.925571] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=10005b 00:07:42.261 [2024-06-07 21:04:04.928003] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ecc20d3, Actual=a65c26724db0bf48 00:07:42.261 [2024-06-07 21:04:04.929537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=88010a2d4837a266, Actual=49010ade89cf1800 00:07:42.261 passed 00:07:42.261 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-06-07 21:04:04.930167] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:07:42.261 [2024-06-07 21:04:04.930470] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:07:42.261 [2024-06-07 21:04:04.930752] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.261 [2024-06-07 21:04:04.931044] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.261 
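[annotation] Every "Failed to compare Guard" record in these inject_1_2_4_8 cases pits a stored tag against one recomputed from deliberately corrupted data; the Guard itself is the T10-DIF CRC16 of the data block. A sketch of recomputing it with SPDK's helper (assuming spdk/crc16.h's spdk_crc16_t10dif(init_crc, buf, len); the block contents are arbitrary):

    #include <stdio.h>
    #include <string.h>
    #include "spdk/crc16.h"

    int main(void)
    {
        uint8_t block[512];
        uint16_t guard, corrupted_guard;

        memset(block, 0xa5, sizeof(block));
        guard = spdk_crc16_t10dif(0, block, sizeof(block));

        /* Flip one bit, as the inject cases do, and the recomputed Guard
         * no longer matches the stored one -- hence the Expected/Actual
         * pairs in the log. */
        block[0] ^= 0x01;
        corrupted_guard = spdk_crc16_t10dif(0, block, sizeof(block));

        printf("guard=0x%04x corrupted=0x%04x\n", guard, corrupted_guard);
        return 0;
    }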
[2024-06-07 21:04:04.931356] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:42.261 [2024-06-07 21:04:04.931641] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:42.261 [2024-06-07 21:04:04.931933] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=fb79 00:07:42.261 [2024-06-07 21:04:04.932112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=dce 00:07:42.262 [2024-06-07 21:04:04.932304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1aa753ed, Actual=1ab753ed 00:07:42.262 [2024-06-07 21:04:04.932585] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38474660, Actual=38574660 00:07:42.262 [2024-06-07 21:04:04.932911] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.262 [2024-06-07 21:04:04.933210] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.262 [2024-06-07 21:04:04.933507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000000058 00:07:42.262 [2024-06-07 21:04:04.933788] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000000058 00:07:42.262 [2024-06-07 21:04:04.934078] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=4ef99cbe 00:07:42.262 [2024-06-07 21:04:04.934266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=d0c87eb4 00:07:42.262 [2024-06-07 21:04:04.934468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a566a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:42.262 [2024-06-07 21:04:04.934753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88110a2d4837a266, Actual=88010a2d4837a266 00:07:42.262 [2024-06-07 21:04:04.935041] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.262 [2024-06-07 21:04:04.935325] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.262 [2024-06-07 21:04:04.935625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:42.522 [2024-06-07 21:04:04.935905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:42.522 [2024-06-07 21:04:04.936206] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=a65c26724db0bf48 00:07:42.522 [2024-06-07 21:04:04.936398] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=88010a2d4837a266, Actual=49010ade89cf1800 00:07:42.522 passed 00:07:42.522 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-06-07 21:04:04.936625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:07:42.522 [2024-06-07 21:04:04.936977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:07:42.522 [2024-06-07 21:04:04.937286] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.522 [2024-06-07 21:04:04.937578] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.522 [2024-06-07 21:04:04.937887] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:42.522 [2024-06-07 21:04:04.938192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:42.522 [2024-06-07 21:04:04.938479] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=fb79 00:07:42.522 [2024-06-07 21:04:04.938667] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=dce 00:07:42.522 [2024-06-07 21:04:04.938852] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1aa753ed, Actual=1ab753ed 00:07:42.522 [2024-06-07 21:04:04.939156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38474660, Actual=38574660 00:07:42.522 [2024-06-07 21:04:04.939442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.522 [2024-06-07 21:04:04.939732] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.522 [2024-06-07 21:04:04.940021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000000058 00:07:42.522 [2024-06-07 21:04:04.940310] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000000058 00:07:42.522 [2024-06-07 21:04:04.940594] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=4ef99cbe 00:07:42.522 [2024-06-07 21:04:04.940800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=d0c87eb4 00:07:42.522 [2024-06-07 21:04:04.941021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a566a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:42.522 [2024-06-07 21:04:04.941308] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88110a2d4837a266, Actual=88010a2d4837a266 00:07:42.522 [2024-06-07 21:04:04.941609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.522 [2024-06-07 
21:04:04.941905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.523 [2024-06-07 21:04:04.942202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:42.523 [2024-06-07 21:04:04.942484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:42.523 [2024-06-07 21:04:04.942789] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=a65c26724db0bf48 00:07:42.523 [2024-06-07 21:04:04.942971] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=49010ade89cf1800 00:07:42.523 passed 00:07:42.523 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-06-07 21:04:04.943201] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:07:42.523 [2024-06-07 21:04:04.943508] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:07:42.523 [2024-06-07 21:04:04.943805] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.523 [2024-06-07 21:04:04.944089] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.523 [2024-06-07 21:04:04.944413] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:42.523 [2024-06-07 21:04:04.944716] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:42.523 [2024-06-07 21:04:04.945023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=fb79 00:07:42.523 [2024-06-07 21:04:04.945218] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=dce 00:07:42.523 [2024-06-07 21:04:04.945412] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1aa753ed, Actual=1ab753ed 00:07:42.523 [2024-06-07 21:04:04.945696] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38474660, Actual=38574660 00:07:42.523 [2024-06-07 21:04:04.946004] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.523 [2024-06-07 21:04:04.946299] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.523 [2024-06-07 21:04:04.946583] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000000058 00:07:42.523 [2024-06-07 21:04:04.946876] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000000058 00:07:42.523 [2024-06-07 21:04:04.947166] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed 
to compare Guard: LBA=88, Expected=1ab753ed, Actual=4ef99cbe 00:07:42.523 [2024-06-07 21:04:04.947355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=d0c87eb4 00:07:42.523 [2024-06-07 21:04:04.947553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a566a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:42.523 [2024-06-07 21:04:04.947844] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88110a2d4837a266, Actual=88010a2d4837a266 00:07:42.523 [2024-06-07 21:04:04.948129] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.523 [2024-06-07 21:04:04.948420] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.523 [2024-06-07 21:04:04.948719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:42.523 [2024-06-07 21:04:04.949026] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:42.523 [2024-06-07 21:04:04.949347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=a65c26724db0bf48 00:07:42.523 [2024-06-07 21:04:04.949538] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=49010ade89cf1800 00:07:42.523 passed 00:07:42.523 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-06-07 21:04:04.949800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:07:42.523 [2024-06-07 21:04:04.950095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:07:42.523 [2024-06-07 21:04:04.950388] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.523 [2024-06-07 21:04:04.950681] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.523 [2024-06-07 21:04:04.950990] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:42.523 [2024-06-07 21:04:04.951282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:42.523 [2024-06-07 21:04:04.951584] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=fb79 00:07:42.523 [2024-06-07 21:04:04.951773] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=dce 00:07:42.523 passed 00:07:42.523 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-06-07 21:04:04.952012] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1aa753ed, Actual=1ab753ed 00:07:42.523 [2024-06-07 21:04:04.952313] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38474660, Actual=38574660 00:07:42.523 [2024-06-07 21:04:04.952623] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.523 [2024-06-07 21:04:04.952932] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.523 [2024-06-07 21:04:04.953230] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000000058 00:07:42.523 [2024-06-07 21:04:04.953523] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000000058 00:07:42.523 [2024-06-07 21:04:04.953821] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=4ef99cbe 00:07:42.523 [2024-06-07 21:04:04.954004] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=d0c87eb4 00:07:42.523 [2024-06-07 21:04:04.954256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a566a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:42.523 [2024-06-07 21:04:04.954562] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88110a2d4837a266, Actual=88010a2d4837a266 00:07:42.523 [2024-06-07 21:04:04.954867] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.523 [2024-06-07 21:04:04.955168] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.523 [2024-06-07 21:04:04.955452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:42.523 [2024-06-07 21:04:04.955753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:42.523 [2024-06-07 21:04:04.956069] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=a65c26724db0bf48 00:07:42.523 [2024-06-07 21:04:04.956259] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=49010ade89cf1800 00:07:42.523 passed 00:07:42.523 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-06-07 21:04:04.956502] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:07:42.523 [2024-06-07 21:04:04.956821] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:07:42.523 [2024-06-07 21:04:04.957130] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.523 [2024-06-07 21:04:04.957431] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.523 [2024-06-07 21:04:04.957753] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:42.523 [2024-06-07 21:04:04.958037] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:42.523 [2024-06-07 21:04:04.958328] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=fb79 00:07:42.523 [2024-06-07 21:04:04.958507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=dce 00:07:42.523 passed 00:07:42.523 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-06-07 21:04:04.958751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1aa753ed, Actual=1ab753ed 00:07:42.523 [2024-06-07 21:04:04.959055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38474660, Actual=38574660 00:07:42.523 [2024-06-07 21:04:04.959374] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.523 [2024-06-07 21:04:04.959673] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.523 [2024-06-07 21:04:04.959975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000000058 00:07:42.523 [2024-06-07 21:04:04.960271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000000058 00:07:42.524 [2024-06-07 21:04:04.960564] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=4ef99cbe 00:07:42.524 [2024-06-07 21:04:04.960753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=d0c87eb4 00:07:42.524 [2024-06-07 21:04:04.961021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a566a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:42.524 [2024-06-07 21:04:04.961314] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88110a2d4837a266, Actual=88010a2d4837a266 00:07:42.524 [2024-06-07 21:04:04.961622] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.524 [2024-06-07 21:04:04.961906] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:42.524 [2024-06-07 21:04:04.962208] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:42.524 [2024-06-07 21:04:04.962490] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:42.524 [2024-06-07 21:04:04.962799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=a65c26724db0bf48 00:07:42.524 [2024-06-07 21:04:04.963015] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=49010ade89cf1800 00:07:42.524 passed 00:07:42.524 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:07:42.524 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:42.524 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:42.524 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:42.524 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:42.524 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:42.524 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:42.524 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:42.524 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:42.524 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-06-07 21:04:05.007005] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd48, Actual=fd4c 00:07:42.524 [2024-06-07 21:04:05.008122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=98b4, Actual=98b0 00:07:42.524 [2024-06-07 21:04:05.009230] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:07:42.524 [2024-06-07 21:04:05.010320] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:07:42.524 [2024-06-07 21:04:05.011420] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=4005d 00:07:42.524 [2024-06-07 21:04:05.012523] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=4005d 00:07:42.524 [2024-06-07 21:04:05.013630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=efc2 00:07:42.524 [2024-06-07 21:04:05.014708] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=62ae 00:07:42.524 [2024-06-07 21:04:05.015830] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab353ed, Actual=1ab753ed 00:07:42.524 [2024-06-07 21:04:05.016943] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5f73c241, Actual=5f77c241 00:07:42.524 [2024-06-07 21:04:05.018063] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:07:42.524 [2024-06-07 21:04:05.019176] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:07:42.524 [2024-06-07 21:04:05.020271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=400000000005d 00:07:42.524 [2024-06-07 21:04:05.021405] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=400000000005d 00:07:42.524 [2024-06-07 21:04:05.022517] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=909541d2 
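[annotation] The "Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=4005d" records above follow the DIF Type 1 protection rule: the expected Reference Tag starts at the context's initial reference tag (here the LBA itself, 93 = 0x5d) and increments by one per block, while the Actual values are tags the test corrupted on purpose. The rule, as a sketch (pure C, Type 1 semantics from the T10 spec rather than SPDK's exact code):

    #include <stdint.h>

    /* DIF Type 1: block i of an I/O carries init_ref_tag + i, truncated
     * to 32 bits. The "Expected" values in the log are exactly this
     * computation. */
    static uint32_t type1_expected_ref_tag(uint32_t init_ref_tag, uint32_t block_idx)
    {
        return init_ref_tag + block_idx;  /* wraps modulo 2^32 */
    }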
00:07:42.524 [2024-06-07 21:04:05.023611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5a3d6598, Actual=1bf982a7 00:07:42.524 [2024-06-07 21:04:05.024702] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a572a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:42.524 [2024-06-07 21:04:05.025838] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=d5379796f8d40b1f, Actual=d5339796f8d40b1f 00:07:42.524 [2024-06-07 21:04:05.026936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:07:42.524 [2024-06-07 21:04:05.028032] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:07:42.524 [2024-06-07 21:04:05.029162] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=59 00:07:42.524 [2024-06-07 21:04:05.030282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=59 00:07:42.524 [2024-06-07 21:04:05.031363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=82482540d853028a 00:07:42.524 passed 00:07:42.524 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-06-07 21:04:05.032491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=7de3058c61e7c179 00:07:42.524 [2024-06-07 21:04:05.032874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd48, Actual=fd4c 00:07:42.524 [2024-06-07 21:04:05.033183] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fb35, Actual=fb31 00:07:42.524 [2024-06-07 21:04:05.033484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:07:42.524 [2024-06-07 21:04:05.033743] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:07:42.524 [2024-06-07 21:04:05.034028] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40059 00:07:42.524 [2024-06-07 21:04:05.034323] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40059 00:07:42.524 [2024-06-07 21:04:05.034580] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=efc2 00:07:42.524 [2024-06-07 21:04:05.034847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=12f 00:07:42.524 [2024-06-07 21:04:05.035105] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab353ed, Actual=1ab753ed 00:07:42.524 [2024-06-07 21:04:05.035381] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9ef3f7b4, Actual=9ef7f7b4 00:07:42.524 [2024-06-07 21:04:05.035653] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:07:42.524 [2024-06-07 21:04:05.035930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:07:42.524 [2024-06-07 21:04:05.036192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000000059 00:07:42.524 [2024-06-07 21:04:05.036473] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000000059 00:07:42.524 [2024-06-07 21:04:05.036743] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=909541d2 00:07:42.524 [2024-06-07 21:04:05.037030] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=da79b752 00:07:42.524 [2024-06-07 21:04:05.037319] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a572a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:42.524 [2024-06-07 21:04:05.037579] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=20d59805de3b8f40, Actual=20d19805de3b8f40 00:07:42.524 [2024-06-07 21:04:05.037848] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:07:42.524 [2024-06-07 21:04:05.038105] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:07:42.524 [2024-06-07 21:04:05.038380] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:07:42.524 [2024-06-07 21:04:05.038638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:07:42.524 [2024-06-07 21:04:05.038918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=82482540d853028a 00:07:42.524 [2024-06-07 21:04:05.039188] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=88010a1f47084526 00:07:42.524 passed 00:07:42.524 Test: dix_sec_512_md_0_error ...[2024-06-07 21:04:05.039257] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
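[annotation] The dix_* cases repeat the dif_* checks for DIX, where the 8-byte protection tuple lives in a separate metadata buffer instead of being interleaved after each data block; dix_sec_512_md_0_error above trips the same spdk_dif_ctx_init validation ("Metadata size is smaller than DIF size"). The tuple itself is the standard T10 protection-information layout, as a sketch (fields are big-endian on the wire; the struct name is mine):

    #include <stdint.h>

    /* Standard 8-byte T10 PI tuple, one per data block. With DIF it trails
     * each block in line; with DIX all tuples sit in a separate metadata
     * buffer, which is what these dix_* cases exercise. */
    struct t10_pi_tuple {
        uint16_t guard;    /* CRC16 of the data block */
        uint16_t app_tag;  /* compared under apptag_mask */
        uint32_t ref_tag;  /* Type 1: init_ref_tag + block index */
    };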
00:07:42.524 passed 00:07:42.524 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:07:42.524 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:42.524 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:42.524 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:42.524 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:42.524 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:42.524 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:42.524 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:42.524 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:42.524 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-06-07 21:04:05.082684] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd48, Actual=fd4c 00:07:42.524 [2024-06-07 21:04:05.083820] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=98b4, Actual=98b0 00:07:42.524 [2024-06-07 21:04:05.084938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:07:42.524 [2024-06-07 21:04:05.086033] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:07:42.524 [2024-06-07 21:04:05.087142] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=4005d 00:07:42.524 [2024-06-07 21:04:05.088238] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=4005d 00:07:42.525 [2024-06-07 21:04:05.089332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=efc2 00:07:42.525 [2024-06-07 21:04:05.090434] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=62ae 00:07:42.525 [2024-06-07 21:04:05.091531] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab353ed, Actual=1ab753ed 00:07:42.525 [2024-06-07 21:04:05.092623] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5f73c241, Actual=5f77c241 00:07:42.525 [2024-06-07 21:04:05.093770] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:07:42.525 [2024-06-07 21:04:05.094866] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:07:42.525 [2024-06-07 21:04:05.095946] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=400000000005d 00:07:42.525 [2024-06-07 21:04:05.097057] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=400000000005d 00:07:42.525 [2024-06-07 21:04:05.098142] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=909541d2 00:07:42.525 [2024-06-07 21:04:05.099231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, 
Expected=5a3d6598, Actual=1bf982a7 00:07:42.525 [2024-06-07 21:04:05.100332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a572a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:42.525 [2024-06-07 21:04:05.101442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=d5379796f8d40b1f, Actual=d5339796f8d40b1f 00:07:42.525 [2024-06-07 21:04:05.102529] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:07:42.525 [2024-06-07 21:04:05.103604] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:07:42.525 [2024-06-07 21:04:05.104729] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=59 00:07:42.525 [2024-06-07 21:04:05.105819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=59 00:07:42.525 [2024-06-07 21:04:05.106933] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=82482540d853028a 00:07:42.525 passed 00:07:42.525 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-06-07 21:04:05.108008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=7de3058c61e7c179 00:07:42.525 [2024-06-07 21:04:05.108401] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd48, Actual=fd4c 00:07:42.525 [2024-06-07 21:04:05.108675] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fb35, Actual=fb31 00:07:42.525 [2024-06-07 21:04:05.108971] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:07:42.525 [2024-06-07 21:04:05.109256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:07:42.525 [2024-06-07 21:04:05.109550] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40059 00:07:42.525 [2024-06-07 21:04:05.109809] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40059 00:07:42.525 [2024-06-07 21:04:05.110072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=efc2 00:07:42.525 [2024-06-07 21:04:05.110326] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=12f 00:07:42.525 [2024-06-07 21:04:05.110595] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab353ed, Actual=1ab753ed 00:07:42.525 [2024-06-07 21:04:05.110855] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9ef3f7b4, Actual=9ef7f7b4 00:07:42.525 [2024-06-07 21:04:05.111132] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:07:42.525 [2024-06-07 21:04:05.111398] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:07:42.525 [2024-06-07 21:04:05.111666] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000000059 00:07:42.525 [2024-06-07 21:04:05.111930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000000059 00:07:42.525 [2024-06-07 21:04:05.112189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=909541d2 00:07:42.525 [2024-06-07 21:04:05.112467] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=da79b752 00:07:42.525 [2024-06-07 21:04:05.112745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a572a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:42.525 [2024-06-07 21:04:05.113024] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=20d59805de3b8f40, Actual=20d19805de3b8f40 00:07:42.525 [2024-06-07 21:04:05.113281] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:07:42.525 [2024-06-07 21:04:05.113546] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:07:42.525 [2024-06-07 21:04:05.113810] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:07:42.525 [2024-06-07 21:04:05.114076] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=5d 00:07:42.525 [2024-06-07 21:04:05.114335] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=82482540d853028a 00:07:42.525 [2024-06-07 21:04:05.114595] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=88010a1f47084526 00:07:42.525 passed 00:07:42.525 Test: set_md_interleave_iovs_test ...passed 00:07:42.525 Test: set_md_interleave_iovs_split_test ...passed 00:07:42.525 Test: dif_generate_stream_pi_16_test ...passed 00:07:42.525 Test: dif_generate_stream_test ...passed 00:07:42.525 Test: set_md_interleave_iovs_alignment_test ...passed 00:07:42.525 Test: dif_generate_split_test ...[2024-06-07 21:04:05.122205] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
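Editor's note: the last error above ("Buffer overflow will occur", from spdk_dif_set_md_interleave_iovs) is a defensive size check rather than an actual overflow. Before interleaving metadata into a caller-supplied buffer, the library verifies that the buffer can hold every block's data plus its metadata. A hedged sketch of that arithmetic with assumed names, not SPDK's API (the sec_4096_md_128 pattern in the test names above corresponds to 4096-byte data blocks carrying 128 bytes of metadata):

#include <stdint.h>
#include <stdio.h>

/* Illustrative check only: an extended-LBA buffer needs
 * (data_block_size + md_size) bytes per block. */
int check_interleave_buf(uint32_t data_block_size, uint32_t md_size,
                         uint64_t num_blocks, uint64_t buf_len)
{
    uint64_t needed = (uint64_t)(data_block_size + md_size) * num_blocks;

    if (buf_len < needed) {
        fprintf(stderr, "Buffer overflow will occur.\n");
        return -1;
    }
    return 0;
}

int main(void)
{
    /* 8 blocks of 4096 B data + 128 B metadata need 33792 bytes, so a
     * bare 8 * 4096 = 32768-byte buffer must be rejected. */
    return check_interleave_buf(4096, 128, 8, 8 * 4096) == -1 ? 0 : 1;
}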
00:07:42.525 passed 00:07:42.525 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:07:42.525 Test: dif_verify_split_test ...passed 00:07:42.525 Test: dif_verify_stream_multi_segments_test ...passed 00:07:42.525 Test: update_crc32c_pi_16_test ...passed 00:07:42.525 Test: update_crc32c_test ...passed 00:07:42.525 Test: dif_update_crc32c_split_test ...passed 00:07:42.525 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:07:42.525 Test: get_range_with_md_test ...passed 00:07:42.525 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:07:42.525 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:07:42.525 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:07:42.525 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:07:42.525 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:07:42.525 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:07:42.525 Test: dif_generate_and_verify_unmap_test ...passed 00:07:42.525 00:07:42.525 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.525 suites 1 1 n/a 0 0 00:07:42.525 tests 79 79 79 0 0 00:07:42.525 asserts 3584 3584 3584 0 n/a 00:07:42.525 00:07:42.525 Elapsed time = 0.343 seconds 00:07:42.525 21:04:05 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:07:42.784 00:07:42.784 00:07:42.784 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.785 http://cunit.sourceforge.net/ 00:07:42.785 00:07:42.785 00:07:42.785 Suite: iov 00:07:42.785 Test: test_single_iov ...passed 00:07:42.785 Test: test_simple_iov ...passed 00:07:42.785 Test: test_complex_iov ...passed 00:07:42.785 Test: test_iovs_to_buf ...passed 00:07:42.785 Test: test_buf_to_iovs ...passed 00:07:42.785 Test: test_memset ...passed 00:07:42.785 Test: test_iov_one ...passed 00:07:42.785 Test: test_iov_xfer ...passed 00:07:42.785 00:07:42.785 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.785 suites 1 1 n/a 0 0 00:07:42.785 tests 8 8 8 0 0 00:07:42.785 asserts 156 156 156 0 n/a 00:07:42.785 00:07:42.785 Elapsed time = 0.000 seconds 00:07:42.785 21:04:05 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:07:42.785 00:07:42.785 00:07:42.785 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.785 http://cunit.sourceforge.net/ 00:07:42.785 00:07:42.785 00:07:42.785 Suite: math 00:07:42.785 Test: test_serial_number_arithmetic ...passed 00:07:42.785 Suite: erase 00:07:42.785 Test: test_memset_s ...passed 00:07:42.785 00:07:42.785 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.785 suites 2 2 n/a 0 0 00:07:42.785 tests 2 2 2 0 0 00:07:42.785 asserts 18 18 18 0 n/a 00:07:42.785 00:07:42.785 Elapsed time = 0.000 seconds 00:07:42.785 21:04:05 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:07:42.785 00:07:42.785 00:07:42.785 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.785 http://cunit.sourceforge.net/ 00:07:42.785 00:07:42.785 00:07:42.785 Suite: pipe 00:07:42.785 Test: test_create_destroy ...passed 00:07:42.785 Test: test_write_get_buffer ...passed 00:07:42.785 Test: test_write_advance ...passed 00:07:42.785 Test: test_read_get_buffer ...passed 00:07:42.785 Test: test_read_advance ...passed 00:07:42.785 Test: test_data ...passed 00:07:42.785 00:07:42.785 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.785 suites 1 1 n/a 0 
0 00:07:42.785 tests 6 6 6 0 0 00:07:42.785 asserts 250 250 250 0 n/a 00:07:42.785 00:07:42.785 Elapsed time = 0.000 seconds 00:07:42.785 21:04:05 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:07:42.785 00:07:42.785 00:07:42.785 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.785 http://cunit.sourceforge.net/ 00:07:42.785 00:07:42.785 00:07:42.785 Suite: xor 00:07:42.785 Test: test_xor_gen ...passed 00:07:42.785 00:07:42.785 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.785 suites 1 1 n/a 0 0 00:07:42.785 tests 1 1 1 0 0 00:07:42.785 asserts 17 17 17 0 n/a 00:07:42.785 00:07:42.785 Elapsed time = 0.006 seconds 00:07:42.785 00:07:42.785 real 0m0.754s 00:07:42.785 user 0m0.581s 00:07:42.785 sys 0m0.178s 00:07:42.785 21:04:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.785 21:04:05 -- common/autotest_common.sh@10 -- # set +x 00:07:42.785 ************************************ 00:07:42.785 END TEST unittest_util 00:07:42.785 ************************************ 00:07:42.785 21:04:05 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:42.785 21:04:05 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:07:42.785 21:04:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:42.785 21:04:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.785 21:04:05 -- common/autotest_common.sh@10 -- # set +x 00:07:42.785 ************************************ 00:07:42.785 START TEST unittest_vhost 00:07:42.785 ************************************ 00:07:42.785 21:04:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:07:42.785 00:07:42.785 00:07:42.785 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.785 http://cunit.sourceforge.net/ 00:07:42.785 00:07:42.785 00:07:42.785 Suite: vhost_suite 00:07:42.785 Test: desc_to_iov_test ...[2024-06-07 21:04:05.391098] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:07:42.785 passed 00:07:42.785 Test: create_controller_test ...[2024-06-07 21:04:05.395773] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:07:42.785 [2024-06-07 21:04:05.396015] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:07:42.785 [2024-06-07 21:04:05.396243] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:07:42.785 [2024-06-07 21:04:05.396426] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:07:42.785 [2024-06-07 21:04:05.396588] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:07:42.785 [2024-06-07 21:04:05.396854] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-06-07 21:04:05.398144] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:07:42.785 passed 00:07:42.785 Test: session_find_by_vid_test ...passed 00:07:42.785 Test: remove_controller_test ...[2024-06-07 21:04:05.400690] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:07:42.785 passed 00:07:42.785 Test: vq_avail_ring_get_test ...passed 00:07:42.785 Test: vq_packed_ring_test ...passed 00:07:42.785 Test: vhost_blk_construct_test ...passed 00:07:42.785 00:07:42.785 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.785 suites 1 1 n/a 0 0 00:07:42.785 tests 7 7 7 0 0 00:07:42.785 asserts 145 145 145 0 n/a 00:07:42.785 00:07:42.785 Elapsed time = 0.012 seconds 00:07:42.785 00:07:42.785 real 0m0.052s 00:07:42.785 user 0m0.020s 00:07:42.785 sys 0m0.029s 00:07:42.785 21:04:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.785 21:04:05 -- common/autotest_common.sh@10 -- # set +x 00:07:42.785 ************************************ 00:07:42.785 END TEST unittest_vhost 00:07:42.785 ************************************ 00:07:43.045 21:04:05 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:43.045 21:04:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:43.045 21:04:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.045 21:04:05 -- common/autotest_common.sh@10 -- # set +x 00:07:43.045 ************************************ 00:07:43.045 START TEST unittest_dma 00:07:43.045 ************************************ 00:07:43.045 21:04:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:43.045 00:07:43.045 00:07:43.045 CUnit - A unit testing framework for C - Version 2.1-3 00:07:43.045 http://cunit.sourceforge.net/ 00:07:43.045 00:07:43.045 00:07:43.045 Suite: dma_suite 00:07:43.045 Test: test_dma ...[2024-06-07 21:04:05.489355] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:07:43.045 passed 00:07:43.045 00:07:43.045 Run Summary: Type Total Ran Passed Failed Inactive 00:07:43.045 suites 1 1 n/a 0 0 00:07:43.045 tests 1 1 1 0 0 00:07:43.045 asserts 50 50 50 0 n/a 00:07:43.045 00:07:43.045 Elapsed time = 0.000 seconds 00:07:43.045 00:07:43.045 real 0m0.029s 00:07:43.045 user 0m0.013s 00:07:43.045 sys 0m0.016s 00:07:43.045 21:04:05 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.045 ************************************ 00:07:43.045 21:04:05 -- common/autotest_common.sh@10 -- # set +x 00:07:43.045 END TEST unittest_dma 00:07:43.045 ************************************ 00:07:43.045 21:04:05 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:07:43.045 21:04:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:43.045 21:04:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.045 21:04:05 -- common/autotest_common.sh@10 -- # set +x 00:07:43.045 ************************************ 00:07:43.045 START TEST unittest_init 00:07:43.045 ************************************ 00:07:43.045 21:04:05 -- common/autotest_common.sh@1104 -- # unittest_init 00:07:43.045 21:04:05 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:07:43.045 00:07:43.045 00:07:43.045 CUnit - A unit testing framework for C - Version 2.1-3 00:07:43.045 http://cunit.sourceforge.net/ 00:07:43.045 00:07:43.045 00:07:43.045 Suite: subsystem_suite 00:07:43.045 Test: subsystem_sort_test_depends_on_single ...passed 00:07:43.045 Test: subsystem_sort_test_depends_on_multiple ...passed 00:07:43.045 Test: subsystem_sort_test_missing_dependency ...[2024-06-07 21:04:05.576596] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:07:43.045 passed 00:07:43.045 00:07:43.045 [2024-06-07 21:04:05.577021] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:07:43.045 Run Summary: Type Total Ran Passed Failed Inactive 00:07:43.045 suites 1 1 n/a 0 0 00:07:43.045 tests 3 3 3 0 0 00:07:43.045 asserts 20 20 20 0 n/a 00:07:43.045 00:07:43.045 Elapsed time = 0.001 seconds 00:07:43.045 00:07:43.045 real 0m0.039s 00:07:43.045 user 0m0.021s 00:07:43.045 sys 0m0.019s 00:07:43.045 21:04:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.045 ************************************ 00:07:43.045 END TEST unittest_init 00:07:43.045 ************************************ 00:07:43.045 21:04:05 -- common/autotest_common.sh@10 -- # set +x 00:07:43.045 21:04:05 -- unit/unittest.sh@289 -- # '[' yes = yes ']' 00:07:43.045 21:04:05 -- unit/unittest.sh@289 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:07:43.045 21:04:05 -- unit/unittest.sh@290 -- # hostname 00:07:43.045 21:04:05 -- unit/unittest.sh@290 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:07:43.303 geninfo: WARNING: invalid characters removed from testname! 
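Editor's note: the subsystem_suite above feeds lib/init's sorter a subsystem graph with an unregistered dependency and expects spdk_subsystem_init to fail with "subsystem A dependency B is missing". The essential check is that every name in a subsystem's depends-on set resolves to a registered subsystem before initialization order is computed. A stand-alone sketch of that validation; the single-dependency shape and names are assumptions for brevity, not the layout in lib/init/subsystem.c:

#include <stdio.h>
#include <string.h>

struct subsystem {
    const char *name;
    const char *depends_on;   /* NULL if the subsystem has no dependency */
};

/* Return 0 if every declared dependency names a registered subsystem. */
static int check_dependencies(const struct subsystem *subs, int n)
{
    for (int i = 0; i < n; i++) {
        if (subs[i].depends_on == NULL)
            continue;
        int found = 0;
        for (int j = 0; j < n; j++) {
            if (strcmp(subs[j].name, subs[i].depends_on) == 0) {
                found = 1;
                break;
            }
        }
        if (!found) {
            fprintf(stderr, "subsystem %s dependency %s is missing\n",
                    subs[i].name, subs[i].depends_on);
            return -1;
        }
    }
    return 0;
}

int main(void)
{
    struct subsystem subs[] = { { "A", "B" } };   /* "B" is never registered */
    return check_dependencies(subs, 1) == -1 ? 0 : 1;
}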
00:08:09.854 21:04:31 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info
00:08:14.040 21:04:36 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:16.590 21:04:39 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:19.890 21:04:42 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:22.420 21:04:44 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:24.978 21:04:47 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:27.511 21:04:50 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:30.050 21:04:52 -- unit/unittest.sh@298 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info
00:08:30.050 21:04:52 -- unit/unittest.sh@299 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:08:30.616 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:30.616 Found 309 entries.
00:08:30.616 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:08:30.616 Writing .css and .png files. 00:08:30.616 Generating output. 00:08:30.616 Processing file include/linux/virtio_ring.h 00:08:30.874 Processing file include/spdk/base64.h 00:08:30.874 Processing file include/spdk/endian.h 00:08:30.874 Processing file include/spdk/util.h 00:08:30.874 Processing file include/spdk/nvme_spec.h 00:08:30.874 Processing file include/spdk/mmio.h 00:08:30.874 Processing file include/spdk/bdev_module.h 00:08:30.874 Processing file include/spdk/thread.h 00:08:30.874 Processing file include/spdk/nvmf_transport.h 00:08:30.874 Processing file include/spdk/trace.h 00:08:30.874 Processing file include/spdk/nvme.h 00:08:30.874 Processing file include/spdk/histogram_data.h 00:08:31.132 Processing file include/spdk_internal/nvme_tcp.h 00:08:31.132 Processing file include/spdk_internal/utf.h 00:08:31.132 Processing file include/spdk_internal/sgl.h 00:08:31.132 Processing file include/spdk_internal/virtio.h 00:08:31.132 Processing file include/spdk_internal/rdma.h 00:08:31.132 Processing file include/spdk_internal/sock.h 00:08:31.132 Processing file lib/accel/accel_rpc.c 00:08:31.132 Processing file lib/accel/accel_sw.c 00:08:31.132 Processing file lib/accel/accel.c 00:08:31.391 Processing file lib/bdev/bdev.c 00:08:31.391 Processing file lib/bdev/bdev_rpc.c 00:08:31.391 Processing file lib/bdev/bdev_zone.c 00:08:31.391 Processing file lib/bdev/part.c 00:08:31.391 Processing file lib/bdev/scsi_nvme.c 00:08:31.649 Processing file lib/blob/blob_bs_dev.c 00:08:31.649 Processing file lib/blob/blobstore.c 00:08:31.649 Processing file lib/blob/blobstore.h 00:08:31.649 Processing file lib/blob/zeroes.c 00:08:31.649 Processing file lib/blob/request.c 00:08:31.907 Processing file lib/blobfs/tree.c 00:08:31.907 Processing file lib/blobfs/blobfs.c 00:08:31.907 Processing file lib/conf/conf.c 00:08:31.907 Processing file lib/dma/dma.c 00:08:32.166 Processing file lib/env_dpdk/pci_idxd.c 00:08:32.166 Processing file lib/env_dpdk/sigbus_handler.c 00:08:32.166 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:08:32.166 Processing file lib/env_dpdk/pci_ioat.c 00:08:32.166 Processing file lib/env_dpdk/pci_virtio.c 00:08:32.166 Processing file lib/env_dpdk/threads.c 00:08:32.166 Processing file lib/env_dpdk/pci_vmd.c 00:08:32.166 Processing file lib/env_dpdk/pci_event.c 00:08:32.166 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:08:32.166 Processing file lib/env_dpdk/pci_dpdk.c 00:08:32.166 Processing file lib/env_dpdk/init.c 00:08:32.166 Processing file lib/env_dpdk/env.c 00:08:32.166 Processing file lib/env_dpdk/pci.c 00:08:32.166 Processing file lib/env_dpdk/memory.c 00:08:32.424 Processing file lib/event/app.c 00:08:32.424 Processing file lib/event/scheduler_static.c 00:08:32.424 Processing file lib/event/log_rpc.c 00:08:32.424 Processing file lib/event/app_rpc.c 00:08:32.424 Processing file lib/event/reactor.c 00:08:32.991 Processing file lib/ftl/ftl_core.h 00:08:32.991 Processing file lib/ftl/ftl_debug.h 00:08:32.991 Processing file lib/ftl/ftl_band.c 00:08:32.991 Processing file lib/ftl/ftl_reloc.c 00:08:32.991 Processing file lib/ftl/ftl_io.c 00:08:32.991 Processing file lib/ftl/ftl_band.h 00:08:32.991 Processing file lib/ftl/ftl_layout.c 00:08:32.991 Processing file lib/ftl/ftl_p2l.c 00:08:32.991 Processing file lib/ftl/ftl_writer.c 00:08:32.991 Processing file lib/ftl/ftl_trace.c 00:08:32.991 Processing file lib/ftl/ftl_sb.c 00:08:32.991 Processing file lib/ftl/ftl_l2p_flat.c 00:08:32.991 Processing 
file lib/ftl/ftl_rq.c 00:08:32.991 Processing file lib/ftl/ftl_l2p_cache.c 00:08:32.991 Processing file lib/ftl/ftl_core.c 00:08:32.991 Processing file lib/ftl/ftl_debug.c 00:08:32.991 Processing file lib/ftl/ftl_nv_cache.h 00:08:32.991 Processing file lib/ftl/ftl_band_ops.c 00:08:32.991 Processing file lib/ftl/ftl_writer.h 00:08:32.991 Processing file lib/ftl/ftl_io.h 00:08:32.991 Processing file lib/ftl/ftl_init.c 00:08:32.991 Processing file lib/ftl/ftl_nv_cache_io.h 00:08:32.991 Processing file lib/ftl/ftl_l2p.c 00:08:32.991 Processing file lib/ftl/ftl_nv_cache.c 00:08:32.991 Processing file lib/ftl/base/ftl_base_bdev.c 00:08:32.991 Processing file lib/ftl/base/ftl_base_dev.c 00:08:33.249 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:08:33.249 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:08:33.249 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:08:33.249 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:08:33.249 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:08:33.249 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:08:33.249 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:08:33.249 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:08:33.249 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:08:33.249 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:08:33.249 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:08:33.249 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:08:33.249 Processing file lib/ftl/mngt/ftl_mngt.c 00:08:33.509 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:08:33.509 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:08:33.509 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:08:33.509 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:08:33.509 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:08:33.509 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:08:33.767 Processing file lib/ftl/utils/ftl_property.h 00:08:33.767 Processing file lib/ftl/utils/ftl_mempool.c 00:08:33.767 Processing file lib/ftl/utils/ftl_df.h 00:08:33.767 Processing file lib/ftl/utils/ftl_property.c 00:08:33.767 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:08:33.767 Processing file lib/ftl/utils/ftl_md.c 00:08:33.767 Processing file lib/ftl/utils/ftl_bitmap.c 00:08:33.767 Processing file lib/ftl/utils/ftl_addr_utils.h 00:08:33.767 Processing file lib/ftl/utils/ftl_conf.c 00:08:33.767 Processing file lib/idxd/idxd_user.c 00:08:33.767 Processing file lib/idxd/idxd_internal.h 00:08:33.767 Processing file lib/idxd/idxd.c 00:08:34.026 Processing file lib/init/json_config.c 00:08:34.026 Processing file lib/init/subsystem.c 00:08:34.026 Processing file lib/init/subsystem_rpc.c 00:08:34.026 Processing file lib/init/rpc.c 00:08:34.026 Processing file lib/ioat/ioat.c 00:08:34.026 Processing file lib/ioat/ioat_internal.h 00:08:34.598 Processing file lib/iscsi/iscsi.h 00:08:34.598 Processing file lib/iscsi/conn.c 00:08:34.598 Processing file lib/iscsi/iscsi_rpc.c 00:08:34.598 Processing file lib/iscsi/md5.c 00:08:34.598 Processing file lib/iscsi/tgt_node.c 00:08:34.598 Processing file lib/iscsi/init_grp.c 00:08:34.598 Processing file lib/iscsi/param.c 00:08:34.598 Processing file lib/iscsi/iscsi.c 00:08:34.598 Processing file lib/iscsi/task.h 00:08:34.598 Processing file lib/iscsi/portal_grp.c 00:08:34.598 Processing file lib/iscsi/task.c 00:08:34.598 Processing file lib/iscsi/iscsi_subsystem.c 00:08:34.598 Processing file lib/json/json_write.c 00:08:34.598 Processing file lib/json/json_util.c 00:08:34.598 Processing file lib/json/json_parse.c 00:08:34.865 Processing file 
lib/jsonrpc/jsonrpc_client.c 00:08:34.865 Processing file lib/jsonrpc/jsonrpc_server.c 00:08:34.865 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:08:34.865 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:08:34.865 Processing file lib/log/log.c 00:08:34.865 Processing file lib/log/log_deprecated.c 00:08:34.865 Processing file lib/log/log_flags.c 00:08:34.865 Processing file lib/lvol/lvol.c 00:08:35.124 Processing file lib/nbd/nbd_rpc.c 00:08:35.124 Processing file lib/nbd/nbd.c 00:08:35.124 Processing file lib/notify/notify_rpc.c 00:08:35.124 Processing file lib/notify/notify.c 00:08:36.059 Processing file lib/nvme/nvme_quirks.c 00:08:36.059 Processing file lib/nvme/nvme_rdma.c 00:08:36.059 Processing file lib/nvme/nvme_poll_group.c 00:08:36.059 Processing file lib/nvme/nvme_transport.c 00:08:36.059 Processing file lib/nvme/nvme_qpair.c 00:08:36.059 Processing file lib/nvme/nvme_cuse.c 00:08:36.059 Processing file lib/nvme/nvme_pcie.c 00:08:36.059 Processing file lib/nvme/nvme_tcp.c 00:08:36.059 Processing file lib/nvme/nvme_ns.c 00:08:36.059 Processing file lib/nvme/nvme_fabric.c 00:08:36.059 Processing file lib/nvme/nvme.c 00:08:36.059 Processing file lib/nvme/nvme_vfio_user.c 00:08:36.059 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:08:36.059 Processing file lib/nvme/nvme_pcie_internal.h 00:08:36.059 Processing file lib/nvme/nvme_ns_cmd.c 00:08:36.059 Processing file lib/nvme/nvme_zns.c 00:08:36.059 Processing file lib/nvme/nvme_discovery.c 00:08:36.060 Processing file lib/nvme/nvme_opal.c 00:08:36.060 Processing file lib/nvme/nvme_io_msg.c 00:08:36.060 Processing file lib/nvme/nvme_ctrlr.c 00:08:36.060 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:08:36.060 Processing file lib/nvme/nvme_internal.h 00:08:36.060 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:08:36.060 Processing file lib/nvme/nvme_pcie_common.c 00:08:36.628 Processing file lib/nvmf/ctrlr_discovery.c 00:08:36.628 Processing file lib/nvmf/subsystem.c 00:08:36.628 Processing file lib/nvmf/nvmf_internal.h 00:08:36.628 Processing file lib/nvmf/transport.c 00:08:36.628 Processing file lib/nvmf/nvmf_rpc.c 00:08:36.628 Processing file lib/nvmf/ctrlr_bdev.c 00:08:36.628 Processing file lib/nvmf/tcp.c 00:08:36.628 Processing file lib/nvmf/nvmf.c 00:08:36.628 Processing file lib/nvmf/ctrlr.c 00:08:36.628 Processing file lib/nvmf/rdma.c 00:08:36.628 Processing file lib/rdma/rdma_verbs.c 00:08:36.628 Processing file lib/rdma/common.c 00:08:36.628 Processing file lib/rpc/rpc.c 00:08:36.887 Processing file lib/scsi/lun.c 00:08:36.887 Processing file lib/scsi/scsi_pr.c 00:08:36.887 Processing file lib/scsi/port.c 00:08:36.887 Processing file lib/scsi/scsi_rpc.c 00:08:36.887 Processing file lib/scsi/scsi.c 00:08:36.887 Processing file lib/scsi/task.c 00:08:36.887 Processing file lib/scsi/scsi_bdev.c 00:08:36.887 Processing file lib/scsi/dev.c 00:08:37.146 Processing file lib/sock/sock_rpc.c 00:08:37.146 Processing file lib/sock/sock.c 00:08:37.146 Processing file lib/thread/thread.c 00:08:37.146 Processing file lib/thread/iobuf.c 00:08:37.404 Processing file lib/trace/trace.c 00:08:37.404 Processing file lib/trace/trace_flags.c 00:08:37.404 Processing file lib/trace/trace_rpc.c 00:08:37.404 Processing file lib/trace_parser/trace.cpp 00:08:37.404 Processing file lib/ut/ut.c 00:08:37.404 Processing file lib/ut_mock/mock.c 00:08:37.971 Processing file lib/util/fd_group.c 00:08:37.971 Processing file lib/util/string.c 00:08:37.971 Processing file lib/util/bit_array.c 00:08:37.971 Processing file lib/util/base64.c 00:08:37.971 
Processing file lib/util/file.c 00:08:37.971 Processing file lib/util/crc64.c 00:08:37.971 Processing file lib/util/iov.c 00:08:37.971 Processing file lib/util/crc32_ieee.c 00:08:37.971 Processing file lib/util/pipe.c 00:08:37.971 Processing file lib/util/dif.c 00:08:37.971 Processing file lib/util/strerror_tls.c 00:08:37.971 Processing file lib/util/zipf.c 00:08:37.971 Processing file lib/util/hexlify.c 00:08:37.971 Processing file lib/util/crc32c.c 00:08:37.971 Processing file lib/util/uuid.c 00:08:37.971 Processing file lib/util/crc32.c 00:08:37.971 Processing file lib/util/math.c 00:08:37.971 Processing file lib/util/cpuset.c 00:08:37.971 Processing file lib/util/xor.c 00:08:37.971 Processing file lib/util/crc16.c 00:08:37.971 Processing file lib/util/fd.c 00:08:37.971 Processing file lib/vfio_user/host/vfio_user.c 00:08:37.971 Processing file lib/vfio_user/host/vfio_user_pci.c 00:08:38.229 Processing file lib/vhost/vhost_internal.h 00:08:38.229 Processing file lib/vhost/vhost.c 00:08:38.229 Processing file lib/vhost/vhost_rpc.c 00:08:38.229 Processing file lib/vhost/vhost_scsi.c 00:08:38.229 Processing file lib/vhost/rte_vhost_user.c 00:08:38.229 Processing file lib/vhost/vhost_blk.c 00:08:38.229 Processing file lib/virtio/virtio_vfio_user.c 00:08:38.229 Processing file lib/virtio/virtio.c 00:08:38.229 Processing file lib/virtio/virtio_pci.c 00:08:38.229 Processing file lib/virtio/virtio_vhost_user.c 00:08:38.488 Processing file lib/vmd/led.c 00:08:38.488 Processing file lib/vmd/vmd.c 00:08:38.488 Processing file module/accel/dsa/accel_dsa.c 00:08:38.488 Processing file module/accel/dsa/accel_dsa_rpc.c 00:08:38.488 Processing file module/accel/error/accel_error.c 00:08:38.488 Processing file module/accel/error/accel_error_rpc.c 00:08:38.746 Processing file module/accel/iaa/accel_iaa_rpc.c 00:08:38.746 Processing file module/accel/iaa/accel_iaa.c 00:08:38.746 Processing file module/accel/ioat/accel_ioat.c 00:08:38.746 Processing file module/accel/ioat/accel_ioat_rpc.c 00:08:38.746 Processing file module/bdev/aio/bdev_aio.c 00:08:38.746 Processing file module/bdev/aio/bdev_aio_rpc.c 00:08:39.005 Processing file module/bdev/delay/vbdev_delay.c 00:08:39.005 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:08:39.005 Processing file module/bdev/error/vbdev_error.c 00:08:39.005 Processing file module/bdev/error/vbdev_error_rpc.c 00:08:39.005 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:08:39.005 Processing file module/bdev/ftl/bdev_ftl.c 00:08:39.264 Processing file module/bdev/gpt/gpt.c 00:08:39.264 Processing file module/bdev/gpt/gpt.h 00:08:39.264 Processing file module/bdev/gpt/vbdev_gpt.c 00:08:39.264 Processing file module/bdev/iscsi/bdev_iscsi.c 00:08:39.264 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:08:39.523 Processing file module/bdev/lvol/vbdev_lvol.c 00:08:39.523 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:08:39.523 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:08:39.523 Processing file module/bdev/malloc/bdev_malloc.c 00:08:39.523 Processing file module/bdev/null/bdev_null.c 00:08:39.523 Processing file module/bdev/null/bdev_null_rpc.c 00:08:39.782 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:08:39.782 Processing file module/bdev/nvme/nvme_rpc.c 00:08:39.782 Processing file module/bdev/nvme/bdev_mdns_client.c 00:08:39.782 Processing file module/bdev/nvme/bdev_nvme.c 00:08:39.782 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:08:39.782 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:08:39.782 Processing file 
module/bdev/nvme/vbdev_opal.c 00:08:40.041 Processing file module/bdev/passthru/vbdev_passthru.c 00:08:40.041 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:08:40.300 Processing file module/bdev/raid/bdev_raid_rpc.c 00:08:40.300 Processing file module/bdev/raid/raid1.c 00:08:40.300 Processing file module/bdev/raid/raid0.c 00:08:40.300 Processing file module/bdev/raid/concat.c 00:08:40.300 Processing file module/bdev/raid/bdev_raid.c 00:08:40.300 Processing file module/bdev/raid/bdev_raid_sb.c 00:08:40.300 Processing file module/bdev/raid/bdev_raid.h 00:08:40.300 Processing file module/bdev/raid/raid5f.c 00:08:40.300 Processing file module/bdev/split/vbdev_split.c 00:08:40.300 Processing file module/bdev/split/vbdev_split_rpc.c 00:08:40.559 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:08:40.559 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:08:40.559 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:08:40.559 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:08:40.559 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:08:40.559 Processing file module/blob/bdev/blob_bdev.c 00:08:40.817 Processing file module/blobfs/bdev/blobfs_bdev.c 00:08:40.817 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:08:40.817 Processing file module/env_dpdk/env_dpdk_rpc.c 00:08:40.817 Processing file module/event/subsystems/accel/accel.c 00:08:40.817 Processing file module/event/subsystems/bdev/bdev.c 00:08:41.078 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:08:41.078 Processing file module/event/subsystems/iobuf/iobuf.c 00:08:41.078 Processing file module/event/subsystems/iscsi/iscsi.c 00:08:41.078 Processing file module/event/subsystems/nbd/nbd.c 00:08:41.078 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:08:41.078 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:08:41.337 Processing file module/event/subsystems/scheduler/scheduler.c 00:08:41.337 Processing file module/event/subsystems/scsi/scsi.c 00:08:41.337 Processing file module/event/subsystems/sock/sock.c 00:08:41.337 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:08:41.596 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:08:41.596 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:08:41.596 Processing file module/event/subsystems/vmd/vmd.c 00:08:41.596 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:08:41.596 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:08:41.856 Processing file module/scheduler/gscheduler/gscheduler.c 00:08:41.856 Processing file module/sock/sock_kernel.h 00:08:41.856 Processing file module/sock/posix/posix.c 00:08:41.856 Writing directory view page. 
00:08:41.856 Overall coverage rate: 00:08:41.856 lines......: 39.1% (39241 of 100366 lines) 00:08:41.856 functions..: 42.8% (3585 of 8382 functions) 00:08:42.115 00:08:42.115 00:08:42.115 ===================== 00:08:42.115 All unit tests passed 00:08:42.115 ===================== 00:08:42.115 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:42.115 21:05:04 -- unit/unittest.sh@302 -- # set +x 00:08:42.115 00:08:42.115 00:08:42.115 00:08:42.115 real 3m6.197s 00:08:42.115 user 2m39.808s 00:08:42.115 sys 0m14.748s 00:08:42.115 21:05:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.115 21:05:04 -- common/autotest_common.sh@10 -- # set +x 00:08:42.115 ************************************ 00:08:42.115 END TEST unittest 00:08:42.115 ************************************ 00:08:42.115 21:05:04 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:08:42.115 21:05:04 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:08:42.115 21:05:04 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:08:42.115 21:05:04 -- spdk/autotest.sh@173 -- # timing_enter lib 00:08:42.115 21:05:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:42.115 21:05:04 -- common/autotest_common.sh@10 -- # set +x 00:08:42.115 21:05:04 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:42.115 21:05:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:42.115 21:05:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:42.115 21:05:04 -- common/autotest_common.sh@10 -- # set +x 00:08:42.115 ************************************ 00:08:42.115 START TEST env 00:08:42.115 ************************************ 00:08:42.115 21:05:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:42.115 * Looking for test storage... 
00:08:42.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:42.115 21:05:04 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:42.115 21:05:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:42.115 21:05:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:42.115 21:05:04 -- common/autotest_common.sh@10 -- # set +x 00:08:42.115 ************************************ 00:08:42.115 START TEST env_memory 00:08:42.115 ************************************ 00:08:42.115 21:05:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:42.115 00:08:42.115 00:08:42.115 CUnit - A unit testing framework for C - Version 2.1-3 00:08:42.115 http://cunit.sourceforge.net/ 00:08:42.115 00:08:42.115 00:08:42.115 Suite: memory 00:08:42.115 Test: alloc and free memory map ...[2024-06-07 21:05:04.748927] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:42.115 passed 00:08:42.375 Test: mem map translation ...[2024-06-07 21:05:04.798060] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:42.375 [2024-06-07 21:05:04.798189] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:42.375 [2024-06-07 21:05:04.798319] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:42.375 [2024-06-07 21:05:04.798394] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:42.375 passed 00:08:42.375 Test: mem map registration ...[2024-06-07 21:05:04.884810] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:42.375 [2024-06-07 21:05:04.884938] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:42.375 passed 00:08:42.375 Test: mem map adjacent registrations ...passed 00:08:42.375 00:08:42.375 Run Summary: Type Total Ran Passed Failed Inactive 00:08:42.375 suites 1 1 n/a 0 0 00:08:42.375 tests 4 4 4 0 0 00:08:42.375 asserts 152 152 152 0 n/a 00:08:42.375 00:08:42.375 Elapsed time = 0.296 seconds 00:08:42.375 00:08:42.375 real 0m0.330s 00:08:42.375 user 0m0.310s 00:08:42.375 sys 0m0.020s 00:08:42.375 21:05:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.375 ************************************ 00:08:42.375 END TEST env_memory 00:08:42.375 21:05:05 -- common/autotest_common.sh@10 -- # set +x 00:08:42.375 ************************************ 00:08:42.634 21:05:05 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:42.634 21:05:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:42.634 21:05:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:42.634 21:05:05 -- common/autotest_common.sh@10 -- # set +x 00:08:42.634 ************************************ 00:08:42.634 START TEST env_vtophys 00:08:42.634 ************************************ 00:08:42.634 21:05:05 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:42.634 EAL: lib.eal log level changed from notice to debug 00:08:42.634 EAL: Detected lcore 0 as core 0 on socket 0 00:08:42.634 EAL: Detected lcore 1 as core 0 on socket 0 00:08:42.634 EAL: Detected lcore 2 as core 0 on socket 0 00:08:42.634 EAL: Detected lcore 3 as core 0 on socket 0 00:08:42.634 EAL: Detected lcore 4 as core 0 on socket 0 00:08:42.634 EAL: Detected lcore 5 as core 0 on socket 0 00:08:42.634 EAL: Detected lcore 6 as core 0 on socket 0 00:08:42.634 EAL: Detected lcore 7 as core 0 on socket 0 00:08:42.634 EAL: Detected lcore 8 as core 0 on socket 0 00:08:42.634 EAL: Detected lcore 9 as core 0 on socket 0 00:08:42.634 EAL: Maximum logical cores by configuration: 128 00:08:42.634 EAL: Detected CPU lcores: 10 00:08:42.634 EAL: Detected NUMA nodes: 1 00:08:42.634 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:08:42.634 EAL: Checking presence of .so 'librte_eal.so.24' 00:08:42.634 EAL: Checking presence of .so 'librte_eal.so' 00:08:42.634 EAL: Detected static linkage of DPDK 00:08:42.634 EAL: No shared files mode enabled, IPC will be disabled 00:08:42.634 EAL: Selected IOVA mode 'PA' 00:08:42.634 EAL: Probing VFIO support... 00:08:42.634 EAL: IOMMU type 1 (Type 1) is supported 00:08:42.634 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:42.634 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:42.634 EAL: VFIO support initialized 00:08:42.634 EAL: Ask a virtual area of 0x2e000 bytes 00:08:42.634 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:42.634 EAL: Setting up physically contiguous memory... 00:08:42.634 EAL: Setting maximum number of open files to 1048576 00:08:42.634 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:42.634 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:42.634 EAL: Ask a virtual area of 0x61000 bytes 00:08:42.634 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:42.634 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:42.634 EAL: Ask a virtual area of 0x400000000 bytes 00:08:42.634 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:42.634 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:42.634 EAL: Ask a virtual area of 0x61000 bytes 00:08:42.634 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:42.634 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:42.634 EAL: Ask a virtual area of 0x400000000 bytes 00:08:42.634 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:42.634 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:42.634 EAL: Ask a virtual area of 0x61000 bytes 00:08:42.634 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:42.634 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:42.634 EAL: Ask a virtual area of 0x400000000 bytes 00:08:42.634 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:42.634 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:42.634 EAL: Ask a virtual area of 0x61000 bytes 00:08:42.634 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:42.634 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:42.634 EAL: Ask a virtual area of 0x400000000 bytes 00:08:42.634 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:42.634 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:42.635 EAL: Hugepages will be freed exactly as allocated. 
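Editor's note: the "Ask a virtual area ... / Virtual area found at ... / VA reserved for memseg list ..." exchanges above show EAL claiming address space for its memseg lists before any hugepages exist: each 0x400000000-byte (16 GiB) span is reserved up front so pages can later be committed into a stable layout. A minimal Linux sketch of that reservation technique using an inaccessible anonymous mapping; illustrative only, DPDK's real code path is more involved, and this assumes a 64-bit address space:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 0x400000000ULL;   /* 16 GiB of address space, no memory yet */

    /* PROT_NONE + MAP_NORESERVE: the kernel hands out addresses but
     * commits no pages and charges no swap. */
    void *va = mmap(NULL, len, PROT_NONE,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (va == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("Virtual area found at %p (size = 0x%zx)\n", va, len);

    /* Real pages would later be committed into this range, e.g. with
     * mmap(MAP_FIXED) over subranges as hugepages are allocated. */
    munmap(va, len);
    return 0;
}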
00:08:42.635 EAL: No shared files mode enabled, IPC is disabled 00:08:42.635 EAL: No shared files mode enabled, IPC is disabled 00:08:42.635 EAL: TSC frequency is ~2200000 KHz 00:08:42.635 EAL: Main lcore 0 is ready (tid=7f74d369aa40;cpuset=[0]) 00:08:42.635 EAL: Trying to obtain current memory policy. 00:08:42.635 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:42.635 EAL: Restoring previous memory policy: 0 00:08:42.635 EAL: request: mp_malloc_sync 00:08:42.635 EAL: No shared files mode enabled, IPC is disabled 00:08:42.635 EAL: Heap on socket 0 was expanded by 2MB 00:08:42.635 EAL: No shared files mode enabled, IPC is disabled 00:08:42.635 EAL: Mem event callback 'spdk:(nil)' registered 00:08:42.635 00:08:42.635 00:08:42.635 CUnit - A unit testing framework for C - Version 2.1-3 00:08:42.635 http://cunit.sourceforge.net/ 00:08:42.635 00:08:42.635 00:08:42.635 Suite: components_suite 00:08:43.204 Test: vtophys_malloc_test ...passed 00:08:43.204 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:43.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:43.204 EAL: Restoring previous memory policy: 0 00:08:43.204 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.204 EAL: request: mp_malloc_sync 00:08:43.204 EAL: No shared files mode enabled, IPC is disabled 00:08:43.204 EAL: Heap on socket 0 was expanded by 4MB 00:08:43.204 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.204 EAL: request: mp_malloc_sync 00:08:43.204 EAL: No shared files mode enabled, IPC is disabled 00:08:43.204 EAL: Heap on socket 0 was shrunk by 4MB 00:08:43.204 EAL: Trying to obtain current memory policy. 00:08:43.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:43.204 EAL: Restoring previous memory policy: 0 00:08:43.204 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.204 EAL: request: mp_malloc_sync 00:08:43.204 EAL: No shared files mode enabled, IPC is disabled 00:08:43.204 EAL: Heap on socket 0 was expanded by 6MB 00:08:43.204 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.204 EAL: request: mp_malloc_sync 00:08:43.204 EAL: No shared files mode enabled, IPC is disabled 00:08:43.204 EAL: Heap on socket 0 was shrunk by 6MB 00:08:43.204 EAL: Trying to obtain current memory policy. 00:08:43.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:43.204 EAL: Restoring previous memory policy: 0 00:08:43.204 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.204 EAL: request: mp_malloc_sync 00:08:43.204 EAL: No shared files mode enabled, IPC is disabled 00:08:43.204 EAL: Heap on socket 0 was expanded by 10MB 00:08:43.204 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.204 EAL: request: mp_malloc_sync 00:08:43.204 EAL: No shared files mode enabled, IPC is disabled 00:08:43.204 EAL: Heap on socket 0 was shrunk by 10MB 00:08:43.204 EAL: Trying to obtain current memory policy. 00:08:43.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:43.204 EAL: Restoring previous memory policy: 0 00:08:43.204 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.204 EAL: request: mp_malloc_sync 00:08:43.204 EAL: No shared files mode enabled, IPC is disabled 00:08:43.204 EAL: Heap on socket 0 was expanded by 18MB 00:08:43.204 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.204 EAL: request: mp_malloc_sync 00:08:43.204 EAL: No shared files mode enabled, IPC is disabled 00:08:43.204 EAL: Heap on socket 0 was shrunk by 18MB 00:08:43.204 EAL: Trying to obtain current memory policy. 
00:08:43.204 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:43.204 EAL: Restoring previous memory policy: 0
00:08:43.204 EAL: Calling mem event callback 'spdk:(nil)'
00:08:43.204 EAL: request: mp_malloc_sync
00:08:43.204 EAL: No shared files mode enabled, IPC is disabled
00:08:43.204 EAL: Heap on socket 0 was expanded by 34MB
00:08:43.204 EAL: Calling mem event callback 'spdk:(nil)'
00:08:43.204 EAL: request: mp_malloc_sync
00:08:43.204 EAL: No shared files mode enabled, IPC is disabled
00:08:43.204 EAL: Heap on socket 0 was shrunk by 34MB
00:08:43.204 EAL: Trying to obtain current memory policy.
00:08:43.204 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:43.204 EAL: Restoring previous memory policy: 0
00:08:43.204 EAL: Calling mem event callback 'spdk:(nil)'
00:08:43.204 EAL: request: mp_malloc_sync
00:08:43.204 EAL: No shared files mode enabled, IPC is disabled
00:08:43.204 EAL: Heap on socket 0 was expanded by 66MB
00:08:43.204 EAL: Calling mem event callback 'spdk:(nil)'
00:08:43.204 EAL: request: mp_malloc_sync
00:08:43.204 EAL: No shared files mode enabled, IPC is disabled
00:08:43.204 EAL: Heap on socket 0 was shrunk by 66MB
00:08:43.204 EAL: Trying to obtain current memory policy.
00:08:43.204 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:43.204 EAL: Restoring previous memory policy: 0
00:08:43.204 EAL: Calling mem event callback 'spdk:(nil)'
00:08:43.204 EAL: request: mp_malloc_sync
00:08:43.204 EAL: No shared files mode enabled, IPC is disabled
00:08:43.204 EAL: Heap on socket 0 was expanded by 130MB
00:08:43.464 EAL: Calling mem event callback 'spdk:(nil)'
00:08:43.464 EAL: request: mp_malloc_sync
00:08:43.464 EAL: No shared files mode enabled, IPC is disabled
00:08:43.464 EAL: Heap on socket 0 was shrunk by 130MB
00:08:43.464 EAL: Trying to obtain current memory policy.
00:08:43.464 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:43.464 EAL: Restoring previous memory policy: 0
00:08:43.464 EAL: Calling mem event callback 'spdk:(nil)'
00:08:43.464 EAL: request: mp_malloc_sync
00:08:43.464 EAL: No shared files mode enabled, IPC is disabled
00:08:43.464 EAL: Heap on socket 0 was expanded by 258MB
00:08:43.464 EAL: Calling mem event callback 'spdk:(nil)'
00:08:43.464 EAL: request: mp_malloc_sync
00:08:43.464 EAL: No shared files mode enabled, IPC is disabled
00:08:43.464 EAL: Heap on socket 0 was shrunk by 258MB
00:08:43.464 EAL: Trying to obtain current memory policy.
00:08:43.464 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:43.723 EAL: Restoring previous memory policy: 0
00:08:43.723 EAL: Calling mem event callback 'spdk:(nil)'
00:08:43.723 EAL: request: mp_malloc_sync
00:08:43.723 EAL: No shared files mode enabled, IPC is disabled
00:08:43.723 EAL: Heap on socket 0 was expanded by 514MB
00:08:43.982 EAL: Calling mem event callback 'spdk:(nil)'
00:08:43.982 EAL: request: mp_malloc_sync
00:08:43.982 EAL: No shared files mode enabled, IPC is disabled
00:08:43.982 EAL: Heap on socket 0 was shrunk by 514MB
00:08:43.982 EAL: Trying to obtain current memory policy.
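Editor's note: every round above, and the final 1026MB round just below, follows the same choreography: pin the memory policy, allocate a buffer, watch the 'spdk:(nil)' mem event callback fire as the heap is expanded, free the buffer, and watch the heap shrink again. The sizes (4, 6, 10, 18, ..., 1026 MB) each double the previous one minus 2 MB. A plain-malloc sketch of the driving loop; the real test allocates through SPDK's env layer, not libc malloc, and the expand/shrink messages come from DPDK, so this only mirrors the pattern:

#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Size sequence from the log: 4, 6, 10, 18, 34, ..., 1026 MB,
     * i.e. next = 2 * current - 2. */
    for (size_t mb = 4; mb <= 1026; mb = 2 * mb - 2) {
        void *buf = malloc(mb << 20);
        if (buf == NULL)
            return 1;
        memset(buf, 0xa5, mb << 20);  /* touch the pages ("heap expanded") */
        free(buf);                    /* release them ("heap shrunk") */
    }
    return 0;
}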
00:08:43.982 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:44.241 EAL: Restoring previous memory policy: 0 00:08:44.241 EAL: Calling mem event callback 'spdk:(nil)' 00:08:44.241 EAL: request: mp_malloc_sync 00:08:44.241 EAL: No shared files mode enabled, IPC is disabled 00:08:44.241 EAL: Heap on socket 0 was expanded by 1026MB 00:08:44.241 EAL: Calling mem event callback 'spdk:(nil)' 00:08:44.500 EAL: request: mp_malloc_sync 00:08:44.500 EAL: No shared files mode enabled, IPC is disabled 00:08:44.500 passed 00:08:44.500 00:08:44.500 Run Summary: Type Total Ran Passed Failed Inactive 00:08:44.500 suites 1 1 n/a 0 0 00:08:44.500 tests 2 2 2 0 0 00:08:44.500 asserts 6457 6457 6457 0 n/a 00:08:44.500 00:08:44.500 Elapsed time = 1.799 seconds 00:08:44.500 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:44.500 EAL: Calling mem event callback 'spdk:(nil)' 00:08:44.500 EAL: request: mp_malloc_sync 00:08:44.500 EAL: No shared files mode enabled, IPC is disabled 00:08:44.500 EAL: Heap on socket 0 was shrunk by 2MB 00:08:44.500 EAL: No shared files mode enabled, IPC is disabled 00:08:44.500 EAL: No shared files mode enabled, IPC is disabled 00:08:44.500 EAL: No shared files mode enabled, IPC is disabled 00:08:44.500 00:08:44.500 real 0m2.089s 00:08:44.500 user 0m1.009s 00:08:44.500 sys 0m0.925s 00:08:44.500 21:05:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.500 21:05:07 -- common/autotest_common.sh@10 -- # set +x 00:08:44.500 ************************************ 00:08:44.500 END TEST env_vtophys 00:08:44.500 ************************************ 00:08:44.759 21:05:07 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:44.759 21:05:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:44.759 21:05:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:44.759 21:05:07 -- common/autotest_common.sh@10 -- # set +x 00:08:44.759 ************************************ 00:08:44.759 START TEST env_pci 00:08:44.759 ************************************ 00:08:44.759 21:05:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:44.759 00:08:44.759 00:08:44.759 CUnit - A unit testing framework for C - Version 2.1-3 00:08:44.759 http://cunit.sourceforge.net/ 00:08:44.759 00:08:44.759 00:08:44.759 Suite: pci 00:08:44.759 Test: pci_hook ...[2024-06-07 21:05:07.232064] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 116906 has claimed it 00:08:44.759 EAL: Cannot find device (10000:00:01.0) 00:08:44.759 EAL: Failed to attach device on primary process 00:08:44.759 passed 00:08:44.759 00:08:44.759 Run Summary: Type Total Ran Passed Failed Inactive 00:08:44.759 suites 1 1 n/a 0 0 00:08:44.759 tests 1 1 1 0 0 00:08:44.759 asserts 25 25 25 0 n/a 00:08:44.759 00:08:44.759 Elapsed time = 0.006 seconds 00:08:44.759 00:08:44.759 real 0m0.069s 00:08:44.759 user 0m0.027s 00:08:44.759 sys 0m0.042s 00:08:44.759 21:05:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.759 ************************************ 00:08:44.759 END TEST env_pci 00:08:44.759 21:05:07 -- common/autotest_common.sh@10 -- # set +x 00:08:44.759 ************************************ 00:08:44.759 21:05:07 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:44.759 21:05:07 -- env/env.sh@15 -- # uname 00:08:44.759 21:05:07 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:44.759 21:05:07 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:08:44.759 21:05:07 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:44.760 21:05:07 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:44.760 21:05:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:44.760 21:05:07 -- common/autotest_common.sh@10 -- # set +x 00:08:44.760 ************************************ 00:08:44.760 START TEST env_dpdk_post_init 00:08:44.760 ************************************ 00:08:44.760 21:05:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:44.760 EAL: Detected CPU lcores: 10 00:08:44.760 EAL: Detected NUMA nodes: 1 00:08:44.760 EAL: Detected static linkage of DPDK 00:08:44.760 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:44.760 EAL: Selected IOVA mode 'PA' 00:08:44.760 EAL: VFIO support initialized 00:08:45.018 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:45.018 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:08:45.018 Starting DPDK initialization... 00:08:45.018 Starting SPDK post initialization... 00:08:45.018 SPDK NVMe probe 00:08:45.019 Attaching to 0000:00:06.0 00:08:45.019 Attached to 0000:00:06.0 00:08:45.019 Cleaning up... 00:08:45.019 00:08:45.019 real 0m0.254s 00:08:45.019 user 0m0.069s 00:08:45.019 sys 0m0.085s 00:08:45.019 21:05:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.019 21:05:07 -- common/autotest_common.sh@10 -- # set +x 00:08:45.019 ************************************ 00:08:45.019 END TEST env_dpdk_post_init 00:08:45.019 ************************************ 00:08:45.019 21:05:07 -- env/env.sh@26 -- # uname 00:08:45.019 21:05:07 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:45.019 21:05:07 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:45.019 21:05:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:45.019 21:05:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:45.019 21:05:07 -- common/autotest_common.sh@10 -- # set +x 00:08:45.019 ************************************ 00:08:45.019 START TEST env_mem_callbacks 00:08:45.019 ************************************ 00:08:45.019 21:05:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:45.019 EAL: Detected CPU lcores: 10 00:08:45.019 EAL: Detected NUMA nodes: 1 00:08:45.019 EAL: Detected static linkage of DPDK 00:08:45.277 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:45.277 EAL: Selected IOVA mode 'PA' 00:08:45.277 EAL: VFIO support initialized 00:08:45.277 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:45.277 00:08:45.277 00:08:45.277 CUnit - A unit testing framework for C - Version 2.1-3 00:08:45.277 http://cunit.sourceforge.net/ 00:08:45.277 00:08:45.277 00:08:45.277 Suite: memory 00:08:45.277 Test: test ... 
00:08:45.277 register 0x200000200000 2097152 00:08:45.277 malloc 3145728 00:08:45.277 register 0x200000400000 4194304 00:08:45.277 buf 0x200000500000 len 3145728 PASSED 00:08:45.277 malloc 64 00:08:45.277 buf 0x2000004fff40 len 64 PASSED 00:08:45.277 malloc 4194304 00:08:45.277 register 0x200000800000 6291456 00:08:45.277 buf 0x200000a00000 len 4194304 PASSED 00:08:45.277 free 0x200000500000 3145728 00:08:45.277 free 0x2000004fff40 64 00:08:45.277 unregister 0x200000400000 4194304 PASSED 00:08:45.277 free 0x200000a00000 4194304 00:08:45.277 unregister 0x200000800000 6291456 PASSED 00:08:45.277 malloc 8388608 00:08:45.277 register 0x200000400000 10485760 00:08:45.277 buf 0x200000600000 len 8388608 PASSED 00:08:45.277 free 0x200000600000 8388608 00:08:45.277 unregister 0x200000400000 10485760 PASSED 00:08:45.277 passed 00:08:45.277 00:08:45.277 Run Summary: Type Total Ran Passed Failed Inactive 00:08:45.277 suites 1 1 n/a 0 0 00:08:45.277 tests 1 1 1 0 0 00:08:45.277 asserts 15 15 15 0 n/a 00:08:45.277 00:08:45.277 Elapsed time = 0.008 seconds 00:08:45.277 00:08:45.277 real 0m0.216s 00:08:45.277 user 0m0.050s 00:08:45.277 sys 0m0.066s 00:08:45.277 21:05:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.278 ************************************ 00:08:45.278 END TEST env_mem_callbacks 00:08:45.278 ************************************ 00:08:45.278 21:05:07 -- common/autotest_common.sh@10 -- # set +x 00:08:45.278 00:08:45.278 real 0m3.286s 00:08:45.278 user 0m1.647s 00:08:45.278 sys 0m1.270s 00:08:45.278 21:05:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.278 21:05:07 -- common/autotest_common.sh@10 -- # set +x 00:08:45.278 ************************************ 00:08:45.278 END TEST env 00:08:45.278 ************************************ 00:08:45.278 21:05:07 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:45.278 21:05:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:45.278 21:05:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:45.278 21:05:07 -- common/autotest_common.sh@10 -- # set +x 00:08:45.278 ************************************ 00:08:45.278 START TEST rpc 00:08:45.278 ************************************ 00:08:45.278 21:05:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:45.536 * Looking for test storage... 00:08:45.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:45.536 21:05:08 -- rpc/rpc.sh@65 -- # spdk_pid=117027 00:08:45.536 21:05:08 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:45.536 21:05:08 -- rpc/rpc.sh@67 -- # waitforlisten 117027 00:08:45.536 21:05:08 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:45.536 21:05:08 -- common/autotest_common.sh@819 -- # '[' -z 117027 ']' 00:08:45.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.536 21:05:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.536 21:05:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:45.536 21:05:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
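Note: in the mem_callbacks run above, each "register" length is a 2 MiB-aligned superset of the preceding "malloc" (3 MiB malloc -> 4 MiB registered, 4 MiB -> 6 MiB, 8 MiB -> 10 MiB), consistent with the heap growing in 2 MiB hugepage units plus allocator overhead; the 64-byte malloc triggers no registration because it is carved from an already-registered region. To read the raw byte counts in IEC units (numfmt is GNU coreutils — our convenience, not something the test runs):

    printf '%s\n' 2097152 3145728 4194304 6291456 8388608 10485760 | numfmt --to=iec
    # -> 2.0M 3.0M 4.0M 6.0M 8.0M 10M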
00:08:45.536 21:05:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:45.536 21:05:08 -- common/autotest_common.sh@10 -- # set +x 00:08:45.536 [2024-06-07 21:05:08.097014] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:45.536 [2024-06-07 21:05:08.097262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117027 ] 00:08:45.795 [2024-06-07 21:05:08.263071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.795 [2024-06-07 21:05:08.347414] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:45.795 [2024-06-07 21:05:08.347689] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:45.795 [2024-06-07 21:05:08.347728] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 117027' to capture a snapshot of events at runtime. 00:08:45.795 [2024-06-07 21:05:08.347765] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid117027 for offline analysis/debug. 00:08:45.795 [2024-06-07 21:05:08.347884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.364 21:05:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:46.364 21:05:09 -- common/autotest_common.sh@852 -- # return 0 00:08:46.364 21:05:09 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:46.364 21:05:09 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:46.364 21:05:09 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:46.364 21:05:09 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:46.364 21:05:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:46.364 21:05:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:46.364 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.623 ************************************ 00:08:46.623 START TEST rpc_integrity 00:08:46.623 ************************************ 00:08:46.623 21:05:09 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:08:46.623 21:05:09 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:46.623 21:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.623 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.623 21:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.623 21:05:09 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:46.623 21:05:09 -- rpc/rpc.sh@13 -- # jq length 00:08:46.623 21:05:09 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:46.623 21:05:09 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:46.623 21:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.623 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.623 21:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.623 21:05:09 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:46.623 21:05:09 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:46.623 21:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.623 21:05:09 -- 
common/autotest_common.sh@10 -- # set +x 00:08:46.623 21:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.623 21:05:09 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:46.623 { 00:08:46.623 "name": "Malloc0", 00:08:46.623 "aliases": [ 00:08:46.623 "622314c1-e1b1-4da2-8958-b54beaf5c779" 00:08:46.623 ], 00:08:46.623 "product_name": "Malloc disk", 00:08:46.623 "block_size": 512, 00:08:46.623 "num_blocks": 16384, 00:08:46.623 "uuid": "622314c1-e1b1-4da2-8958-b54beaf5c779", 00:08:46.623 "assigned_rate_limits": { 00:08:46.624 "rw_ios_per_sec": 0, 00:08:46.624 "rw_mbytes_per_sec": 0, 00:08:46.624 "r_mbytes_per_sec": 0, 00:08:46.624 "w_mbytes_per_sec": 0 00:08:46.624 }, 00:08:46.624 "claimed": false, 00:08:46.624 "zoned": false, 00:08:46.624 "supported_io_types": { 00:08:46.624 "read": true, 00:08:46.624 "write": true, 00:08:46.624 "unmap": true, 00:08:46.624 "write_zeroes": true, 00:08:46.624 "flush": true, 00:08:46.624 "reset": true, 00:08:46.624 "compare": false, 00:08:46.624 "compare_and_write": false, 00:08:46.624 "abort": true, 00:08:46.624 "nvme_admin": false, 00:08:46.624 "nvme_io": false 00:08:46.624 }, 00:08:46.624 "memory_domains": [ 00:08:46.624 { 00:08:46.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.624 "dma_device_type": 2 00:08:46.624 } 00:08:46.624 ], 00:08:46.624 "driver_specific": {} 00:08:46.624 } 00:08:46.624 ]' 00:08:46.624 21:05:09 -- rpc/rpc.sh@17 -- # jq length 00:08:46.624 21:05:09 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:46.624 21:05:09 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:46.624 21:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.624 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.624 [2024-06-07 21:05:09.200411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:46.624 [2024-06-07 21:05:09.200539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.624 [2024-06-07 21:05:09.200589] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:46.624 [2024-06-07 21:05:09.200613] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.624 [2024-06-07 21:05:09.203250] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.624 [2024-06-07 21:05:09.203344] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:46.624 Passthru0 00:08:46.624 21:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.624 21:05:09 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:46.624 21:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.624 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.624 21:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.624 21:05:09 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:46.624 { 00:08:46.624 "name": "Malloc0", 00:08:46.624 "aliases": [ 00:08:46.624 "622314c1-e1b1-4da2-8958-b54beaf5c779" 00:08:46.624 ], 00:08:46.624 "product_name": "Malloc disk", 00:08:46.624 "block_size": 512, 00:08:46.624 "num_blocks": 16384, 00:08:46.624 "uuid": "622314c1-e1b1-4da2-8958-b54beaf5c779", 00:08:46.624 "assigned_rate_limits": { 00:08:46.624 "rw_ios_per_sec": 0, 00:08:46.624 "rw_mbytes_per_sec": 0, 00:08:46.624 "r_mbytes_per_sec": 0, 00:08:46.624 "w_mbytes_per_sec": 0 00:08:46.624 }, 00:08:46.624 "claimed": true, 00:08:46.624 "claim_type": "exclusive_write", 00:08:46.624 "zoned": false, 00:08:46.624 "supported_io_types": { 00:08:46.624 "read": true, 
00:08:46.624 "write": true, 00:08:46.624 "unmap": true, 00:08:46.624 "write_zeroes": true, 00:08:46.624 "flush": true, 00:08:46.624 "reset": true, 00:08:46.624 "compare": false, 00:08:46.624 "compare_and_write": false, 00:08:46.624 "abort": true, 00:08:46.624 "nvme_admin": false, 00:08:46.624 "nvme_io": false 00:08:46.624 }, 00:08:46.624 "memory_domains": [ 00:08:46.624 { 00:08:46.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.624 "dma_device_type": 2 00:08:46.624 } 00:08:46.624 ], 00:08:46.624 "driver_specific": {} 00:08:46.624 }, 00:08:46.624 { 00:08:46.624 "name": "Passthru0", 00:08:46.624 "aliases": [ 00:08:46.624 "9021202f-f472-5650-96e7-75b58e3ca468" 00:08:46.624 ], 00:08:46.624 "product_name": "passthru", 00:08:46.624 "block_size": 512, 00:08:46.624 "num_blocks": 16384, 00:08:46.624 "uuid": "9021202f-f472-5650-96e7-75b58e3ca468", 00:08:46.624 "assigned_rate_limits": { 00:08:46.624 "rw_ios_per_sec": 0, 00:08:46.624 "rw_mbytes_per_sec": 0, 00:08:46.624 "r_mbytes_per_sec": 0, 00:08:46.624 "w_mbytes_per_sec": 0 00:08:46.624 }, 00:08:46.624 "claimed": false, 00:08:46.624 "zoned": false, 00:08:46.624 "supported_io_types": { 00:08:46.624 "read": true, 00:08:46.624 "write": true, 00:08:46.624 "unmap": true, 00:08:46.624 "write_zeroes": true, 00:08:46.624 "flush": true, 00:08:46.624 "reset": true, 00:08:46.624 "compare": false, 00:08:46.624 "compare_and_write": false, 00:08:46.624 "abort": true, 00:08:46.624 "nvme_admin": false, 00:08:46.624 "nvme_io": false 00:08:46.624 }, 00:08:46.624 "memory_domains": [ 00:08:46.624 { 00:08:46.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.624 "dma_device_type": 2 00:08:46.624 } 00:08:46.624 ], 00:08:46.624 "driver_specific": { 00:08:46.624 "passthru": { 00:08:46.624 "name": "Passthru0", 00:08:46.624 "base_bdev_name": "Malloc0" 00:08:46.624 } 00:08:46.624 } 00:08:46.624 } 00:08:46.624 ]' 00:08:46.624 21:05:09 -- rpc/rpc.sh@21 -- # jq length 00:08:46.624 21:05:09 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:46.624 21:05:09 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:46.624 21:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.624 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.624 21:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.624 21:05:09 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:46.624 21:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.624 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.883 21:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.883 21:05:09 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:46.883 21:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.883 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.883 21:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.883 21:05:09 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:46.883 21:05:09 -- rpc/rpc.sh@26 -- # jq length 00:08:46.883 21:05:09 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:46.883 00:08:46.883 real 0m0.324s 00:08:46.883 user 0m0.233s 00:08:46.883 sys 0m0.025s 00:08:46.883 21:05:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.883 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.883 ************************************ 00:08:46.883 END TEST rpc_integrity 00:08:46.883 ************************************ 00:08:46.883 21:05:09 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:46.883 21:05:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
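Note: stripped of the xtrace noise, the rpc_integrity test above boils down to a handful of RPCs issued through scripts/rpc.py against the default /var/tmp/spdk.sock; a condensed replay using the same entry points seen in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 8 512                      # 8 MiB, 512 B blocks -> Malloc0 (16384 blocks, as in the JSON above)
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0  # Passthru0 claims Malloc0 ("claimed": true, "claim_type": "exclusive_write")
    $rpc bdev_get_bdevs | jq length                    # 2
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0
    $rpc bdev_get_bdevs | jq length                    # 0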
00:08:46.883 21:05:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:46.883 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.883 ************************************ 00:08:46.883 START TEST rpc_plugins 00:08:46.883 ************************************ 00:08:46.883 21:05:09 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:08:46.883 21:05:09 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:46.883 21:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.883 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.883 21:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.883 21:05:09 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:46.883 21:05:09 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:46.883 21:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.883 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.883 21:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.883 21:05:09 -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:46.883 { 00:08:46.883 "name": "Malloc1", 00:08:46.883 "aliases": [ 00:08:46.883 "1973bf52-dda2-40ca-b3bc-c78d67917bc3" 00:08:46.883 ], 00:08:46.883 "product_name": "Malloc disk", 00:08:46.883 "block_size": 4096, 00:08:46.883 "num_blocks": 256, 00:08:46.883 "uuid": "1973bf52-dda2-40ca-b3bc-c78d67917bc3", 00:08:46.883 "assigned_rate_limits": { 00:08:46.883 "rw_ios_per_sec": 0, 00:08:46.883 "rw_mbytes_per_sec": 0, 00:08:46.883 "r_mbytes_per_sec": 0, 00:08:46.883 "w_mbytes_per_sec": 0 00:08:46.883 }, 00:08:46.883 "claimed": false, 00:08:46.883 "zoned": false, 00:08:46.883 "supported_io_types": { 00:08:46.883 "read": true, 00:08:46.883 "write": true, 00:08:46.883 "unmap": true, 00:08:46.883 "write_zeroes": true, 00:08:46.883 "flush": true, 00:08:46.883 "reset": true, 00:08:46.883 "compare": false, 00:08:46.883 "compare_and_write": false, 00:08:46.883 "abort": true, 00:08:46.883 "nvme_admin": false, 00:08:46.883 "nvme_io": false 00:08:46.883 }, 00:08:46.883 "memory_domains": [ 00:08:46.883 { 00:08:46.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.883 "dma_device_type": 2 00:08:46.883 } 00:08:46.883 ], 00:08:46.883 "driver_specific": {} 00:08:46.883 } 00:08:46.883 ]' 00:08:46.883 21:05:09 -- rpc/rpc.sh@32 -- # jq length 00:08:46.883 21:05:09 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:46.883 21:05:09 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:46.883 21:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.883 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.883 21:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.883 21:05:09 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:46.883 21:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.883 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.883 21:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.883 21:05:09 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:46.883 21:05:09 -- rpc/rpc.sh@36 -- # jq length 00:08:47.142 21:05:09 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:47.142 00:08:47.142 real 0m0.156s 00:08:47.142 user 0m0.120s 00:08:47.142 sys 0m0.004s 00:08:47.142 21:05:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.142 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:47.142 ************************************ 00:08:47.142 END TEST rpc_plugins 00:08:47.142 ************************************ 00:08:47.142 21:05:09 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:08:47.142 21:05:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:47.142 21:05:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:47.142 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:47.142 ************************************ 00:08:47.142 START TEST rpc_trace_cmd_test 00:08:47.142 ************************************ 00:08:47.142 21:05:09 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:08:47.142 21:05:09 -- rpc/rpc.sh@40 -- # local info 00:08:47.142 21:05:09 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:47.142 21:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.142 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:47.142 21:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.142 21:05:09 -- rpc/rpc.sh@42 -- # info='{ 00:08:47.142 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid117027", 00:08:47.142 "tpoint_group_mask": "0x8", 00:08:47.142 "iscsi_conn": { 00:08:47.142 "mask": "0x2", 00:08:47.142 "tpoint_mask": "0x0" 00:08:47.142 }, 00:08:47.142 "scsi": { 00:08:47.142 "mask": "0x4", 00:08:47.142 "tpoint_mask": "0x0" 00:08:47.142 }, 00:08:47.142 "bdev": { 00:08:47.142 "mask": "0x8", 00:08:47.142 "tpoint_mask": "0xffffffffffffffff" 00:08:47.142 }, 00:08:47.142 "nvmf_rdma": { 00:08:47.142 "mask": "0x10", 00:08:47.142 "tpoint_mask": "0x0" 00:08:47.142 }, 00:08:47.142 "nvmf_tcp": { 00:08:47.142 "mask": "0x20", 00:08:47.142 "tpoint_mask": "0x0" 00:08:47.142 }, 00:08:47.142 "ftl": { 00:08:47.142 "mask": "0x40", 00:08:47.142 "tpoint_mask": "0x0" 00:08:47.142 }, 00:08:47.142 "blobfs": { 00:08:47.142 "mask": "0x80", 00:08:47.142 "tpoint_mask": "0x0" 00:08:47.142 }, 00:08:47.142 "dsa": { 00:08:47.142 "mask": "0x200", 00:08:47.142 "tpoint_mask": "0x0" 00:08:47.142 }, 00:08:47.142 "thread": { 00:08:47.142 "mask": "0x400", 00:08:47.142 "tpoint_mask": "0x0" 00:08:47.142 }, 00:08:47.142 "nvme_pcie": { 00:08:47.142 "mask": "0x800", 00:08:47.142 "tpoint_mask": "0x0" 00:08:47.142 }, 00:08:47.142 "iaa": { 00:08:47.142 "mask": "0x1000", 00:08:47.142 "tpoint_mask": "0x0" 00:08:47.142 }, 00:08:47.142 "nvme_tcp": { 00:08:47.142 "mask": "0x2000", 00:08:47.142 "tpoint_mask": "0x0" 00:08:47.142 }, 00:08:47.142 "bdev_nvme": { 00:08:47.142 "mask": "0x4000", 00:08:47.142 "tpoint_mask": "0x0" 00:08:47.142 } 00:08:47.142 }' 00:08:47.142 21:05:09 -- rpc/rpc.sh@43 -- # jq length 00:08:47.142 21:05:09 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:08:47.142 21:05:09 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:47.142 21:05:09 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:47.142 21:05:09 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:47.142 21:05:09 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:47.142 21:05:09 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:47.401 21:05:09 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:47.401 21:05:09 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:47.401 21:05:09 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:47.401 00:08:47.401 real 0m0.309s 00:08:47.401 user 0m0.280s 00:08:47.401 sys 0m0.021s 00:08:47.401 21:05:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.401 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:47.401 ************************************ 00:08:47.401 END TEST rpc_trace_cmd_test 00:08:47.401 ************************************ 00:08:47.401 21:05:09 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:47.401 21:05:09 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:47.401 21:05:09 -- rpc/rpc.sh@81 -- # 
run_test rpc_daemon_integrity rpc_integrity 00:08:47.401 21:05:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:47.401 21:05:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:47.401 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:47.401 ************************************ 00:08:47.401 START TEST rpc_daemon_integrity 00:08:47.401 ************************************ 00:08:47.401 21:05:09 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:08:47.401 21:05:09 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:47.401 21:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.401 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:47.401 21:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.401 21:05:09 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:47.401 21:05:09 -- rpc/rpc.sh@13 -- # jq length 00:08:47.401 21:05:10 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:47.401 21:05:10 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:47.401 21:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.401 21:05:10 -- common/autotest_common.sh@10 -- # set +x 00:08:47.401 21:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.401 21:05:10 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:47.401 21:05:10 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:47.401 21:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.401 21:05:10 -- common/autotest_common.sh@10 -- # set +x 00:08:47.401 21:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.401 21:05:10 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:47.401 { 00:08:47.401 "name": "Malloc2", 00:08:47.401 "aliases": [ 00:08:47.401 "3b562ec1-4eed-495f-a359-a31700f76520" 00:08:47.401 ], 00:08:47.401 "product_name": "Malloc disk", 00:08:47.401 "block_size": 512, 00:08:47.401 "num_blocks": 16384, 00:08:47.401 "uuid": "3b562ec1-4eed-495f-a359-a31700f76520", 00:08:47.401 "assigned_rate_limits": { 00:08:47.401 "rw_ios_per_sec": 0, 00:08:47.401 "rw_mbytes_per_sec": 0, 00:08:47.401 "r_mbytes_per_sec": 0, 00:08:47.401 "w_mbytes_per_sec": 0 00:08:47.401 }, 00:08:47.401 "claimed": false, 00:08:47.401 "zoned": false, 00:08:47.401 "supported_io_types": { 00:08:47.401 "read": true, 00:08:47.401 "write": true, 00:08:47.401 "unmap": true, 00:08:47.401 "write_zeroes": true, 00:08:47.401 "flush": true, 00:08:47.401 "reset": true, 00:08:47.401 "compare": false, 00:08:47.401 "compare_and_write": false, 00:08:47.401 "abort": true, 00:08:47.401 "nvme_admin": false, 00:08:47.402 "nvme_io": false 00:08:47.402 }, 00:08:47.402 "memory_domains": [ 00:08:47.402 { 00:08:47.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.402 "dma_device_type": 2 00:08:47.402 } 00:08:47.402 ], 00:08:47.402 "driver_specific": {} 00:08:47.402 } 00:08:47.402 ]' 00:08:47.402 21:05:10 -- rpc/rpc.sh@17 -- # jq length 00:08:47.661 21:05:10 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:47.661 21:05:10 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:47.661 21:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.661 21:05:10 -- common/autotest_common.sh@10 -- # set +x 00:08:47.661 [2024-06-07 21:05:10.134378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:47.661 [2024-06-07 21:05:10.134481] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.661 [2024-06-07 21:05:10.134523] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:47.661 
[2024-06-07 21:05:10.134545] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.661 [2024-06-07 21:05:10.136976] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.661 [2024-06-07 21:05:10.137062] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:47.661 Passthru0 00:08:47.661 21:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.661 21:05:10 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:47.661 21:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.661 21:05:10 -- common/autotest_common.sh@10 -- # set +x 00:08:47.661 21:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.661 21:05:10 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:47.661 { 00:08:47.661 "name": "Malloc2", 00:08:47.661 "aliases": [ 00:08:47.661 "3b562ec1-4eed-495f-a359-a31700f76520" 00:08:47.661 ], 00:08:47.661 "product_name": "Malloc disk", 00:08:47.661 "block_size": 512, 00:08:47.661 "num_blocks": 16384, 00:08:47.661 "uuid": "3b562ec1-4eed-495f-a359-a31700f76520", 00:08:47.661 "assigned_rate_limits": { 00:08:47.661 "rw_ios_per_sec": 0, 00:08:47.661 "rw_mbytes_per_sec": 0, 00:08:47.661 "r_mbytes_per_sec": 0, 00:08:47.661 "w_mbytes_per_sec": 0 00:08:47.661 }, 00:08:47.661 "claimed": true, 00:08:47.661 "claim_type": "exclusive_write", 00:08:47.661 "zoned": false, 00:08:47.661 "supported_io_types": { 00:08:47.661 "read": true, 00:08:47.661 "write": true, 00:08:47.661 "unmap": true, 00:08:47.661 "write_zeroes": true, 00:08:47.661 "flush": true, 00:08:47.661 "reset": true, 00:08:47.661 "compare": false, 00:08:47.661 "compare_and_write": false, 00:08:47.661 "abort": true, 00:08:47.661 "nvme_admin": false, 00:08:47.661 "nvme_io": false 00:08:47.661 }, 00:08:47.661 "memory_domains": [ 00:08:47.661 { 00:08:47.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.661 "dma_device_type": 2 00:08:47.661 } 00:08:47.661 ], 00:08:47.661 "driver_specific": {} 00:08:47.661 }, 00:08:47.661 { 00:08:47.661 "name": "Passthru0", 00:08:47.661 "aliases": [ 00:08:47.661 "47758d02-4675-5878-ad48-d23c23d05ccd" 00:08:47.661 ], 00:08:47.661 "product_name": "passthru", 00:08:47.661 "block_size": 512, 00:08:47.661 "num_blocks": 16384, 00:08:47.661 "uuid": "47758d02-4675-5878-ad48-d23c23d05ccd", 00:08:47.661 "assigned_rate_limits": { 00:08:47.661 "rw_ios_per_sec": 0, 00:08:47.661 "rw_mbytes_per_sec": 0, 00:08:47.661 "r_mbytes_per_sec": 0, 00:08:47.661 "w_mbytes_per_sec": 0 00:08:47.661 }, 00:08:47.661 "claimed": false, 00:08:47.661 "zoned": false, 00:08:47.661 "supported_io_types": { 00:08:47.661 "read": true, 00:08:47.661 "write": true, 00:08:47.661 "unmap": true, 00:08:47.661 "write_zeroes": true, 00:08:47.661 "flush": true, 00:08:47.661 "reset": true, 00:08:47.661 "compare": false, 00:08:47.661 "compare_and_write": false, 00:08:47.661 "abort": true, 00:08:47.661 "nvme_admin": false, 00:08:47.661 "nvme_io": false 00:08:47.661 }, 00:08:47.661 "memory_domains": [ 00:08:47.661 { 00:08:47.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.661 "dma_device_type": 2 00:08:47.661 } 00:08:47.661 ], 00:08:47.661 "driver_specific": { 00:08:47.661 "passthru": { 00:08:47.661 "name": "Passthru0", 00:08:47.661 "base_bdev_name": "Malloc2" 00:08:47.661 } 00:08:47.661 } 00:08:47.661 } 00:08:47.661 ]' 00:08:47.661 21:05:10 -- rpc/rpc.sh@21 -- # jq length 00:08:47.661 21:05:10 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:47.661 21:05:10 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:47.661 21:05:10 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.661 21:05:10 -- common/autotest_common.sh@10 -- # set +x 00:08:47.661 21:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.661 21:05:10 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:47.661 21:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.661 21:05:10 -- common/autotest_common.sh@10 -- # set +x 00:08:47.661 21:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.661 21:05:10 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:47.661 21:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.661 21:05:10 -- common/autotest_common.sh@10 -- # set +x 00:08:47.661 21:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.661 21:05:10 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:47.661 21:05:10 -- rpc/rpc.sh@26 -- # jq length 00:08:47.661 21:05:10 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:47.661 00:08:47.661 real 0m0.319s 00:08:47.661 user 0m0.227s 00:08:47.661 sys 0m0.023s 00:08:47.661 21:05:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.661 21:05:10 -- common/autotest_common.sh@10 -- # set +x 00:08:47.661 ************************************ 00:08:47.661 END TEST rpc_daemon_integrity 00:08:47.661 ************************************ 00:08:47.920 21:05:10 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:47.920 21:05:10 -- rpc/rpc.sh@84 -- # killprocess 117027 00:08:47.920 21:05:10 -- common/autotest_common.sh@926 -- # '[' -z 117027 ']' 00:08:47.920 21:05:10 -- common/autotest_common.sh@930 -- # kill -0 117027 00:08:47.920 21:05:10 -- common/autotest_common.sh@931 -- # uname 00:08:47.920 21:05:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:47.920 21:05:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117027 00:08:47.920 21:05:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:47.920 killing process with pid 117027 00:08:47.920 21:05:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:47.920 21:05:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117027' 00:08:47.920 21:05:10 -- common/autotest_common.sh@945 -- # kill 117027 00:08:47.920 21:05:10 -- common/autotest_common.sh@950 -- # wait 117027 00:08:48.179 00:08:48.179 real 0m2.852s 00:08:48.179 user 0m3.823s 00:08:48.179 sys 0m0.577s 00:08:48.179 21:05:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.179 21:05:10 -- common/autotest_common.sh@10 -- # set +x 00:08:48.179 ************************************ 00:08:48.179 END TEST rpc 00:08:48.179 ************************************ 00:08:48.179 21:05:10 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:48.179 21:05:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:48.179 21:05:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:48.179 21:05:10 -- common/autotest_common.sh@10 -- # set +x 00:08:48.179 ************************************ 00:08:48.179 START TEST rpc_client 00:08:48.179 ************************************ 00:08:48.179 21:05:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:48.438 * Looking for test storage... 
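Note: the trace_get_info output in rpc_trace_cmd_test earlier reflects spdk_tgt having been started with "-e bdev": tpoint_group_mask 0x8 selects the bdev group, and only that group's tpoint_mask is fully enabled (0xffffffffffffffff). The equivalent manual inspection, reusing the hint the target printed at startup (the pid placeholder is ours):

    rpc.py trace_get_info | jq -r .tpoint_group_mask   # "0x8" when started with -e bdev
    spdk_trace -s spdk_tgt -p <spdk_tgt pid>           # capture a snapshot of events at runtime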
00:08:48.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:48.438 21:05:10 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:48.438 OK 00:08:48.438 21:05:10 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:48.438 00:08:48.438 real 0m0.121s 00:08:48.438 user 0m0.065s 00:08:48.438 sys 0m0.065s 00:08:48.438 21:05:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.438 21:05:10 -- common/autotest_common.sh@10 -- # set +x 00:08:48.438 ************************************ 00:08:48.438 END TEST rpc_client 00:08:48.438 ************************************ 00:08:48.438 21:05:10 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:48.438 21:05:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:48.438 21:05:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:48.438 21:05:10 -- common/autotest_common.sh@10 -- # set +x 00:08:48.438 ************************************ 00:08:48.438 START TEST json_config 00:08:48.438 ************************************ 00:08:48.438 21:05:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:48.438 21:05:11 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:48.438 21:05:11 -- nvmf/common.sh@7 -- # uname -s 00:08:48.438 21:05:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.438 21:05:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.438 21:05:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.438 21:05:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.438 21:05:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.438 21:05:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.438 21:05:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.438 21:05:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.438 21:05:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.438 21:05:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.438 21:05:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:081e0461-2a90-4f89-82ff-b04e5f31a55c 00:08:48.438 21:05:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=081e0461-2a90-4f89-82ff-b04e5f31a55c 00:08:48.438 21:05:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.438 21:05:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.438 21:05:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:48.438 21:05:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:48.438 21:05:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.438 21:05:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.438 21:05:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.438 21:05:11 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:48.439 21:05:11 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:48.439 21:05:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:48.439 21:05:11 -- paths/export.sh@5 -- # export PATH 00:08:48.439 21:05:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:48.439 21:05:11 -- nvmf/common.sh@46 -- # : 0 00:08:48.439 21:05:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:48.439 21:05:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:48.439 21:05:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:48.439 21:05:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.439 21:05:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.439 21:05:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:48.439 21:05:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:48.439 21:05:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:48.439 21:05:11 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:08:48.439 21:05:11 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:08:48.439 21:05:11 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:08:48.439 21:05:11 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:48.439 21:05:11 -- json_config/json_config.sh@30 -- # app_pid=([target]="" [initiator]="") 00:08:48.439 21:05:11 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:08:48.439 21:05:11 -- json_config/json_config.sh@31 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock') 00:08:48.439 21:05:11 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:08:48.439 21:05:11 -- json_config/json_config.sh@32 -- # app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024') 00:08:48.439 21:05:11 -- json_config/json_config.sh@32 -- # declare -A app_params 00:08:48.439 21:05:11 -- json_config/json_config.sh@33 -- # configs_path=([target]="$rootdir/spdk_tgt_config.json" [initiator]="$rootdir/spdk_initiator_config.json") 00:08:48.439 21:05:11 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:08:48.439 21:05:11 -- json_config/json_config.sh@43 -- # last_event_id=0 00:08:48.439 21:05:11 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:48.439 21:05:11 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:08:48.439 INFO: JSON configuration test 
init 00:08:48.439 21:05:11 -- json_config/json_config.sh@420 -- # json_config_test_init 00:08:48.439 21:05:11 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:08:48.439 21:05:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:48.439 21:05:11 -- common/autotest_common.sh@10 -- # set +x 00:08:48.439 21:05:11 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:08:48.439 21:05:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:48.439 21:05:11 -- common/autotest_common.sh@10 -- # set +x 00:08:48.439 21:05:11 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:08:48.439 21:05:11 -- json_config/json_config.sh@98 -- # local app=target 00:08:48.439 21:05:11 -- json_config/json_config.sh@99 -- # shift 00:08:48.439 21:05:11 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:08:48.439 21:05:11 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:08:48.439 21:05:11 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:08:48.439 21:05:11 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:48.439 21:05:11 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:48.439 21:05:11 -- json_config/json_config.sh@111 -- # app_pid[$app]=117295 00:08:48.439 21:05:11 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:08:48.439 21:05:11 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:48.439 Waiting for target to run... 00:08:48.439 21:05:11 -- json_config/json_config.sh@114 -- # waitforlisten 117295 /var/tmp/spdk_tgt.sock 00:08:48.439 21:05:11 -- common/autotest_common.sh@819 -- # '[' -z 117295 ']' 00:08:48.439 21:05:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:48.439 21:05:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:48.439 21:05:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:48.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:48.439 21:05:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:48.439 21:05:11 -- common/autotest_common.sh@10 -- # set +x 00:08:48.697 [2024-06-07 21:05:11.166759] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
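Note: for json_config the target is launched with "-m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc" (traced below), so it parks before subsystem initialization and listens on a dedicated socket. A minimal manual sketch of what the harness effectively does next — framework_start_init and save_config are standard SPDK RPCs, and the redirect target mirrors the configs_path declared above:

    rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init                 # let the deferred initialization proceed
    rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json   # snapshot the live configuration for comparison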
00:08:48.697 [2024-06-07 21:05:11.167012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117295 ] 00:08:48.956 [2024-06-07 21:05:11.627887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.214 [2024-06-07 21:05:11.690692] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:49.214 [2024-06-07 21:05:11.690937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.472 00:08:49.472 21:05:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:49.472 21:05:12 -- common/autotest_common.sh@852 -- # return 0 00:08:49.472 21:05:12 -- json_config/json_config.sh@115 -- # echo '' 00:08:49.472 21:05:12 -- json_config/json_config.sh@322 -- # create_accel_config 00:08:49.472 21:05:12 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:08:49.472 21:05:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:49.472 21:05:12 -- common/autotest_common.sh@10 -- # set +x 00:08:49.472 21:05:12 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:08:49.472 21:05:12 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:08:49.472 21:05:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:49.472 21:05:12 -- common/autotest_common.sh@10 -- # set +x 00:08:49.730 21:05:12 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:49.730 21:05:12 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:08:49.730 21:05:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:49.989 21:05:12 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:08:49.989 21:05:12 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:08:49.989 21:05:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:49.989 21:05:12 -- common/autotest_common.sh@10 -- # set +x 00:08:49.989 21:05:12 -- json_config/json_config.sh@48 -- # local ret=0 00:08:49.989 21:05:12 -- json_config/json_config.sh@49 -- # enabled_types=("bdev_register" "bdev_unregister") 00:08:49.989 21:05:12 -- json_config/json_config.sh@49 -- # local enabled_types 00:08:49.989 21:05:12 -- json_config/json_config.sh@51 -- # get_types=($(tgt_rpc notify_get_types | jq -r '.[]')) 00:08:49.989 21:05:12 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:49.989 21:05:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:49.989 21:05:12 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:50.248 21:05:12 -- json_config/json_config.sh@51 -- # local get_types 00:08:50.248 21:05:12 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:08:50.248 21:05:12 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:08:50.248 21:05:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:50.248 21:05:12 -- common/autotest_common.sh@10 -- # set +x 00:08:50.248 21:05:12 -- json_config/json_config.sh@58 -- # return 0 00:08:50.248 21:05:12 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:08:50.248 21:05:12 -- json_config/json_config.sh@332 -- # 
create_bdev_subsystem_config 00:08:50.248 21:05:12 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:08:50.248 21:05:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:50.248 21:05:12 -- common/autotest_common.sh@10 -- # set +x 00:08:50.248 21:05:12 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:08:50.248 21:05:12 -- json_config/json_config.sh@160 -- # local expected_notifications 00:08:50.248 21:05:12 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:08:50.248 21:05:12 -- json_config/json_config.sh@164 -- # get_notifications 00:08:50.248 21:05:12 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:08:50.248 21:05:12 -- json_config/json_config.sh@64 -- # IFS=: 00:08:50.248 21:05:12 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:50.248 21:05:12 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:08:50.248 21:05:12 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:08:50.248 21:05:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:08:50.507 21:05:13 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:08:50.507 21:05:13 -- json_config/json_config.sh@64 -- # IFS=: 00:08:50.507 21:05:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:50.507 21:05:13 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:08:50.507 21:05:13 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:08:50.507 21:05:13 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:08:50.507 21:05:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:08:50.765 Nvme0n1p0 Nvme0n1p1 00:08:50.765 21:05:13 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:08:50.765 21:05:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:08:51.024 [2024-06-07 21:05:13.555843] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:51.024 [2024-06-07 21:05:13.555985] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:51.024 00:08:51.024 21:05:13 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:08:51.024 21:05:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:08:51.283 Malloc3 00:08:51.283 21:05:13 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:08:51.283 21:05:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:08:51.283 [2024-06-07 21:05:13.931987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:51.283 [2024-06-07 21:05:13.932128] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.283 [2024-06-07 21:05:13.932193] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:51.283 [2024-06-07 21:05:13.932245] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:51.283 [2024-06-07 21:05:13.934835] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.283 [2024-06-07 21:05:13.934915] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:08:51.283 PTBdevFromMalloc3 00:08:51.283 21:05:13 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:08:51.283 21:05:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:08:51.546 Null0 00:08:51.546 21:05:14 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:08:51.546 21:05:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:08:51.811 Malloc0 00:08:51.811 21:05:14 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:08:51.811 21:05:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:08:52.069 Malloc1 00:08:52.069 21:05:14 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:08:52.069 21:05:14 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:08:52.328 102400+0 records in 00:08:52.328 102400+0 records out 00:08:52.328 104857600 bytes (105 MB, 100 MiB) copied, 0.260689 s, 402 MB/s 00:08:52.328 21:05:14 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:08:52.328 21:05:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:08:52.586 aio_disk 00:08:52.586 21:05:15 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:08:52.586 21:05:15 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:08:52.586 21:05:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:08:52.586 e89b1fb2-7d38-4583-a7fc-09a3321dd820 00:08:52.586 21:05:15 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:08:52.586 21:05:15 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:08:52.586 21:05:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:08:52.845 21:05:15 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:08:52.845 21:05:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:08:53.104 21:05:15 -- json_config/json_config.sh@207 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:08:53.104 21:05:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:08:53.362 21:05:15 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:08:53.362 21:05:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:08:53.621 21:05:16 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:08:53.621 21:05:16 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:08:53.621 21:05:16 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:18ecea1f-3bd3-467d-96c0-42925a386135 bdev_register:24556d93-02b7-4ce6-af6e-e4dfeae4502e bdev_register:1f841c24-28c6-4b78-832a-3ede0791898b bdev_register:92af689e-14ae-496b-a34f-ade5fc3cc8d9 00:08:53.621 21:05:16 -- json_config/json_config.sh@70 -- # local events_to_check 00:08:53.621 21:05:16 -- json_config/json_config.sh@71 -- # local recorded_events 00:08:53.621 21:05:16 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:08:53.621 21:05:16 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:18ecea1f-3bd3-467d-96c0-42925a386135 bdev_register:24556d93-02b7-4ce6-af6e-e4dfeae4502e bdev_register:1f841c24-28c6-4b78-832a-3ede0791898b bdev_register:92af689e-14ae-496b-a34f-ade5fc3cc8d9 00:08:53.621 21:05:16 -- json_config/json_config.sh@74 -- # sort 00:08:53.621 21:05:16 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:08:53.621 21:05:16 -- json_config/json_config.sh@75 -- # get_notifications 00:08:53.621 21:05:16 -- json_config/json_config.sh@75 -- # sort 00:08:53.621 21:05:16 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:08:53.621 21:05:16 -- json_config/json_config.sh@64 -- # IFS=: 00:08:53.621 21:05:16 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:53.621 21:05:16 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:08:53.621 21:05:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:08:53.621 21:05:16 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:08:53.880 21:05:16 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # IFS=: 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:53.880 21:05:16 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # IFS=: 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:53.880 21:05:16 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p0 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # IFS=: 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:53.880 21:05:16 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # IFS=: 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:53.880 21:05:16 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # IFS=: 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:53.880 21:05:16 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # IFS=: 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:53.880 21:05:16 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # IFS=: 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:53.880 21:05:16 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # IFS=: 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:53.880 21:05:16 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # IFS=: 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:53.880 21:05:16 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # IFS=: 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:53.880 21:05:16 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # IFS=: 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:53.880 21:05:16 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # IFS=: 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:53.880 21:05:16 -- json_config/json_config.sh@65 -- # echo bdev_register:18ecea1f-3bd3-467d-96c0-42925a386135 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # IFS=: 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:53.880 21:05:16 -- json_config/json_config.sh@65 -- # echo bdev_register:24556d93-02b7-4ce6-af6e-e4dfeae4502e 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # IFS=: 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:53.880 21:05:16 -- json_config/json_config.sh@65 -- # echo bdev_register:1f841c24-28c6-4b78-832a-3ede0791898b 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # IFS=: 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:53.880 21:05:16 -- json_config/json_config.sh@65 -- # echo bdev_register:92af689e-14ae-496b-a34f-ade5fc3cc8d9 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # IFS=: 00:08:53.880 21:05:16 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:53.880 21:05:16 -- json_config/json_config.sh@77 
-- # [[ bdev_register:18ecea1f-3bd3-467d-96c0-42925a386135 bdev_register:1f841c24-28c6-4b78-832a-3ede0791898b bdev_register:24556d93-02b7-4ce6-af6e-e4dfeae4502e bdev_register:92af689e-14ae-496b-a34f-ade5fc3cc8d9 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\1\8\e\c\e\a\1\f\-\3\b\d\3\-\4\6\7\d\-\9\6\c\0\-\4\2\9\2\5\a\3\8\6\1\3\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\1\f\8\4\1\c\2\4\-\2\8\c\6\-\4\b\7\8\-\8\3\2\a\-\3\e\d\e\0\7\9\1\8\9\8\b\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\4\5\5\6\d\9\3\-\0\2\b\7\-\4\c\e\6\-\a\f\6\e\-\e\4\d\f\e\a\e\4\5\0\2\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\2\a\f\6\8\9\e\-\1\4\a\e\-\4\9\6\b\-\a\3\4\f\-\a\d\e\5\f\c\3\c\c\8\d\9\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:08:53.880 21:05:16 -- json_config/json_config.sh@89 -- # cat 00:08:53.880 21:05:16 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:18ecea1f-3bd3-467d-96c0-42925a386135 bdev_register:1f841c24-28c6-4b78-832a-3ede0791898b bdev_register:24556d93-02b7-4ce6-af6e-e4dfeae4502e bdev_register:92af689e-14ae-496b-a34f-ade5fc3cc8d9 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk 00:08:53.880 Expected events matched: 00:08:53.880 bdev_register:18ecea1f-3bd3-467d-96c0-42925a386135 00:08:53.880 bdev_register:1f841c24-28c6-4b78-832a-3ede0791898b 00:08:53.880 bdev_register:24556d93-02b7-4ce6-af6e-e4dfeae4502e 00:08:53.880 bdev_register:92af689e-14ae-496b-a34f-ade5fc3cc8d9 00:08:53.880 bdev_register:Malloc0 00:08:53.880 bdev_register:Malloc0p0 00:08:53.880 bdev_register:Malloc0p1 00:08:53.880 bdev_register:Malloc0p2 00:08:53.880 bdev_register:Malloc1 00:08:53.880 bdev_register:Malloc3 00:08:53.880 bdev_register:Null0 00:08:53.880 bdev_register:Nvme0n1 00:08:53.880 bdev_register:Nvme0n1p0 00:08:53.880 bdev_register:Nvme0n1p1 00:08:53.880 bdev_register:PTBdevFromMalloc3 00:08:53.880 bdev_register:aio_disk 00:08:53.880 21:05:16 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:08:53.880 21:05:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:53.880 21:05:16 -- common/autotest_common.sh@10 -- # set +x 00:08:53.880 21:05:16 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:08:53.880 21:05:16 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:08:53.880 21:05:16 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:08:53.880 21:05:16 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:08:53.880 21:05:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:53.880 21:05:16 -- common/autotest_common.sh@10 -- # set +x 00:08:53.880 
21:05:16 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:08:53.880 21:05:16 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:53.880 21:05:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:54.139 MallocBdevForConfigChangeCheck 00:08:54.139 21:05:16 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:08:54.139 21:05:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:54.139 21:05:16 -- common/autotest_common.sh@10 -- # set +x 00:08:54.139 21:05:16 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:08:54.139 21:05:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:54.397 INFO: shutting down applications... 00:08:54.397 21:05:16 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:08:54.397 21:05:16 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:08:54.397 21:05:16 -- json_config/json_config.sh@431 -- # json_config_clear target 00:08:54.397 21:05:16 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:08:54.397 21:05:16 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:54.656 [2024-06-07 21:05:17.166371] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:08:54.656 Calling clear_vhost_scsi_subsystem 00:08:54.656 Calling clear_iscsi_subsystem 00:08:54.656 Calling clear_vhost_blk_subsystem 00:08:54.656 Calling clear_nbd_subsystem 00:08:54.656 Calling clear_nvmf_subsystem 00:08:54.656 Calling clear_bdev_subsystem 00:08:54.656 Calling clear_accel_subsystem 00:08:54.656 Calling clear_iobuf_subsystem 00:08:54.656 Calling clear_sock_subsystem 00:08:54.656 Calling clear_vmd_subsystem 00:08:54.656 Calling clear_scheduler_subsystem 00:08:54.656 21:05:17 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:08:54.656 21:05:17 -- json_config/json_config.sh@396 -- # count=100 00:08:54.656 21:05:17 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:08:54.656 21:05:17 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:54.656 21:05:17 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:54.656 21:05:17 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:08:55.237 21:05:17 -- json_config/json_config.sh@398 -- # break 00:08:55.237 21:05:17 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:08:55.237 21:05:17 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:08:55.237 21:05:17 -- json_config/json_config.sh@120 -- # local app=target 00:08:55.237 21:05:17 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:08:55.237 21:05:17 -- json_config/json_config.sh@124 -- # [[ -n 117295 ]] 00:08:55.237 21:05:17 -- json_config/json_config.sh@127 -- # kill -SIGINT 117295 00:08:55.237 21:05:17 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:08:55.237 21:05:17 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:55.237 21:05:17 -- 
json_config/json_config.sh@130 -- # kill -0 117295 00:08:55.237 21:05:17 -- json_config/json_config.sh@134 -- # sleep 0.5 00:08:55.496 SPDK target shutdown done 00:08:55.496 INFO: relaunching applications... 00:08:55.496 Waiting for target to run... 00:08:55.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:55.496 21:05:18 -- json_config/json_config.sh@129 -- # (( i++ )) 00:08:55.496 21:05:18 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:55.496 21:05:18 -- json_config/json_config.sh@130 -- # kill -0 117295 00:08:55.496 21:05:18 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:08:55.496 21:05:18 -- json_config/json_config.sh@132 -- # break 00:08:55.496 21:05:18 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:08:55.496 21:05:18 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:08:55.496 21:05:18 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:08:55.496 21:05:18 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:55.496 21:05:18 -- json_config/json_config.sh@98 -- # local app=target 00:08:55.496 21:05:18 -- json_config/json_config.sh@99 -- # shift 00:08:55.496 21:05:18 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:08:55.496 21:05:18 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:08:55.496 21:05:18 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:08:55.496 21:05:18 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:55.496 21:05:18 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:55.496 21:05:18 -- json_config/json_config.sh@111 -- # app_pid[$app]=117548 00:08:55.496 21:05:18 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:08:55.496 21:05:18 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:55.496 21:05:18 -- json_config/json_config.sh@114 -- # waitforlisten 117548 /var/tmp/spdk_tgt.sock 00:08:55.496 21:05:18 -- common/autotest_common.sh@819 -- # '[' -z 117548 ']' 00:08:55.496 21:05:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:55.496 21:05:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:55.496 21:05:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:55.496 21:05:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:55.496 21:05:18 -- common/autotest_common.sh@10 -- # set +x 00:08:55.755 [2024-06-07 21:05:18.216471] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
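The relaunch traced above is the generic start-app pattern used throughout json_config.sh: start spdk_tgt against the JSON config saved earlier, record its pid, and wait for the RPC socket to answer. A minimal sketch of that pattern, assuming an SPDK checkout with spdk_tgt and rpc.py in the usual places (paths are illustrative, not taken from the log):

  # Restart the target from the captured config and background it.
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json spdk_tgt_config.json &
  app_pid=$!
  # Poll the socket until the RPC server responds (what waitforlisten does).
  until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 spdk_get_version \
      >/dev/null 2>&1; do
    sleep 0.5
  done

The same start helper is reused for every app flavor in these tests; only the config file and the extra parameters change.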
00:08:55.755 [2024-06-07 21:05:18.216717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117548 ] 00:08:56.014 [2024-06-07 21:05:18.660730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.273 [2024-06-07 21:05:18.719882] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:56.273 [2024-06-07 21:05:18.720174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.273 [2024-06-07 21:05:18.869283] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:56.273 [2024-06-07 21:05:18.869462] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:56.273 [2024-06-07 21:05:18.877249] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:56.273 [2024-06-07 21:05:18.877392] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:56.273 [2024-06-07 21:05:18.885402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:56.273 [2024-06-07 21:05:18.885493] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:56.273 [2024-06-07 21:05:18.885528] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:56.531 [2024-06-07 21:05:18.970523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:56.531 [2024-06-07 21:05:18.970638] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.531 [2024-06-07 21:05:18.970674] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:56.531 [2024-06-07 21:05:18.970699] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.531 [2024-06-07 21:05:18.971210] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.531 [2024-06-07 21:05:18.971295] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:08:56.531 00:08:56.531 INFO: Checking if target configuration is the same... 00:08:56.531 21:05:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:56.531 21:05:19 -- common/autotest_common.sh@852 -- # return 0 00:08:56.531 21:05:19 -- json_config/json_config.sh@115 -- # echo '' 00:08:56.531 21:05:19 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:08:56.531 21:05:19 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:56.531 21:05:19 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:56.531 21:05:19 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:08:56.531 21:05:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:56.531 + '[' 2 -ne 2 ']' 00:08:56.531 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:56.531 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
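waitforlisten, which just returned 0 above for pid 117548, pairs a process liveness check with an RPC probe. A simplified approximation of the helper in autotest_common.sh (the real one carries more retry bookkeeping and diagnostics; rpc.py path is illustrative):

  waitforlisten() {  # sketch: waitforlisten <pid> [rpc-socket]
    local pid=$1 addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
      kill -0 "$pid" 2>/dev/null || return 1          # target died early
      ./scripts/rpc.py -s "$addr" -t 1 rpc_get_methods \
          >/dev/null 2>&1 && return 0                 # RPC server is up
      sleep 0.5
    done
    return 1
  }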
00:08:56.531 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:56.531 +++ basename /dev/fd/62 00:08:56.531 ++ mktemp /tmp/62.XXX 00:08:56.531 + tmp_file_1=/tmp/62.Poi 00:08:56.531 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:56.531 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:56.531 + tmp_file_2=/tmp/spdk_tgt_config.json.2xu 00:08:56.531 + ret=0 00:08:56.531 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:56.790 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:57.048 + diff -u /tmp/62.Poi /tmp/spdk_tgt_config.json.2xu 00:08:57.048 INFO: JSON config files are the same 00:08:57.048 + echo 'INFO: JSON config files are the same' 00:08:57.048 + rm /tmp/62.Poi /tmp/spdk_tgt_config.json.2xu 00:08:57.048 + exit 0 00:08:57.048 INFO: changing configuration and checking if this can be detected... 00:08:57.048 21:05:19 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:08:57.048 21:05:19 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:57.048 21:05:19 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:57.048 21:05:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:57.307 21:05:19 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:08:57.307 21:05:19 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:57.307 21:05:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:57.307 + '[' 2 -ne 2 ']' 00:08:57.307 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:57.307 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:08:57.307 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:57.307 +++ basename /dev/fd/62 00:08:57.307 ++ mktemp /tmp/62.XXX 00:08:57.307 + tmp_file_1=/tmp/62.ohe 00:08:57.307 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:57.307 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:57.307 + tmp_file_2=/tmp/spdk_tgt_config.json.Lua 00:08:57.307 + ret=0 00:08:57.307 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:57.566 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:57.566 + diff -u /tmp/62.ohe /tmp/spdk_tgt_config.json.Lua 00:08:57.566 + ret=1 00:08:57.566 + echo '=== Start of file: /tmp/62.ohe ===' 00:08:57.566 + cat /tmp/62.ohe 00:08:57.566 + echo '=== End of file: /tmp/62.ohe ===' 00:08:57.566 + echo '' 00:08:57.566 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Lua ===' 00:08:57.566 + cat /tmp/spdk_tgt_config.json.Lua 00:08:57.566 + echo '=== End of file: /tmp/spdk_tgt_config.json.Lua ===' 00:08:57.566 + echo '' 00:08:57.566 + rm /tmp/62.ohe /tmp/spdk_tgt_config.json.Lua 00:08:57.566 + exit 1 00:08:57.566 INFO: configuration change detected. 00:08:57.566 21:05:20 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
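Both json_diff.sh runs traced above reduce to one recipe: dump the live config with save_config, normalize both sides with config_filter.py -method sort, and diff; the second run differs by design because MallocBdevForConfigChangeCheck was deleted in between. A rough equivalent, assuming the rpc.py and config_filter.py scripts from the SPDK tree (temp file names are illustrative):

  compare_config() {
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | ./test/json_config/config_filter.py -method sort > /tmp/live.sorted
    ./test/json_config/config_filter.py -method sort \
      < spdk_tgt_config.json > /tmp/file.sorted
    diff -u /tmp/file.sorted /tmp/live.sorted
  }
  compare_config && echo 'INFO: JSON config files are the same'
  # Mutate the live config, then expect the comparison to fail.
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
      bdev_malloc_delete MallocBdevForConfigChangeCheck
  compare_config || echo 'INFO: configuration change detected.'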
00:08:57.566 21:05:20 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:08:57.567 21:05:20 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:08:57.567 21:05:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:57.567 21:05:20 -- common/autotest_common.sh@10 -- # set +x 00:08:57.567 21:05:20 -- json_config/json_config.sh@360 -- # local ret=0 00:08:57.567 21:05:20 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:08:57.567 21:05:20 -- json_config/json_config.sh@370 -- # [[ -n 117548 ]] 00:08:57.567 21:05:20 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:08:57.567 21:05:20 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:08:57.567 21:05:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:57.567 21:05:20 -- common/autotest_common.sh@10 -- # set +x 00:08:57.567 21:05:20 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:08:57.567 21:05:20 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:08:57.567 21:05:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:08:57.825 21:05:20 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:08:57.825 21:05:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:08:58.083 21:05:20 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:08:58.083 21:05:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:08:58.342 21:05:20 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:08:58.342 21:05:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:08:58.342 21:05:20 -- json_config/json_config.sh@246 -- # uname -s 00:08:58.342 21:05:20 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:08:58.342 21:05:20 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:08:58.342 21:05:20 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:08:58.342 21:05:20 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:08:58.342 21:05:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:58.342 21:05:20 -- common/autotest_common.sh@10 -- # set +x 00:08:58.342 21:05:21 -- json_config/json_config.sh@376 -- # killprocess 117548 00:08:58.342 21:05:21 -- common/autotest_common.sh@926 -- # '[' -z 117548 ']' 00:08:58.342 21:05:21 -- common/autotest_common.sh@930 -- # kill -0 117548 00:08:58.342 21:05:21 -- common/autotest_common.sh@931 -- # uname 00:08:58.342 21:05:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:58.342 21:05:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117548 00:08:58.601 killing process with pid 117548 00:08:58.601 21:05:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:58.601 21:05:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:58.601 21:05:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117548' 00:08:58.601 21:05:21 -- common/autotest_common.sh@945 -- # kill 117548 00:08:58.601 21:05:21 -- common/autotest_common.sh@950 -- # wait 117548 00:08:58.860 21:05:21 -- 
json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:58.860 21:05:21 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:08:58.860 21:05:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:58.860 21:05:21 -- common/autotest_common.sh@10 -- # set +x 00:08:58.860 INFO: Success 00:08:58.860 21:05:21 -- json_config/json_config.sh@381 -- # return 0 00:08:58.860 21:05:21 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:08:58.860 00:08:58.860 real 0m10.358s 00:08:58.860 user 0m15.683s 00:08:58.860 sys 0m2.067s 00:08:58.860 21:05:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.860 21:05:21 -- common/autotest_common.sh@10 -- # set +x 00:08:58.860 ************************************ 00:08:58.860 END TEST json_config 00:08:58.860 ************************************ 00:08:58.860 21:05:21 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:58.860 21:05:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:58.860 21:05:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:58.860 21:05:21 -- common/autotest_common.sh@10 -- # set +x 00:08:58.860 ************************************ 00:08:58.860 START TEST json_config_extra_key 00:08:58.860 ************************************ 00:08:58.860 21:05:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:58.860 21:05:21 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:58.860 21:05:21 -- nvmf/common.sh@7 -- # uname -s 00:08:58.860 21:05:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.860 21:05:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.860 21:05:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.860 21:05:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.860 21:05:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.860 21:05:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.860 21:05:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.860 21:05:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.860 21:05:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.860 21:05:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.860 21:05:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4828519-6956-49cc-a62a-6dd3f795a49e 00:08:58.860 21:05:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=a4828519-6956-49cc-a62a-6dd3f795a49e 00:08:58.860 21:05:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.860 21:05:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.860 21:05:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:58.860 21:05:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.860 21:05:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.860 21:05:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.860 21:05:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.860 21:05:21 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:58.860 21:05:21 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:58.860 21:05:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:58.860 21:05:21 -- paths/export.sh@5 -- # export PATH 00:08:58.860 21:05:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:58.860 21:05:21 -- nvmf/common.sh@46 -- # : 0 00:08:58.860 21:05:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:58.860 21:05:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:58.860 21:05:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:58.860 21:05:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.860 21:05:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.860 21:05:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:58.860 21:05:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:58.860 21:05:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:58.860 21:05:21 -- json_config/json_config_extra_key.sh@16 -- # app_pid=([target]="") 00:08:58.860 21:05:21 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:08:58.860 21:05:21 -- json_config/json_config_extra_key.sh@17 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock') 00:08:58.860 21:05:21 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:08:58.860 21:05:21 -- json_config/json_config_extra_key.sh@18 -- # app_params=([target]='-m 0x1 -s 1024') 00:08:58.860 21:05:21 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:08:58.860 21:05:21 -- json_config/json_config_extra_key.sh@19 -- # configs_path=([target]="$rootdir/test/json_config/extra_key.json") 00:08:58.860 21:05:21 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:08:58.860 21:05:21 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:58.860 INFO: launching applications... 00:08:58.860 21:05:21 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 
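The declare -A lines above are how these suites track an app generically: pid, socket, parameters, and config path are each keyed by app name. The pattern, in brief (same shape as the log; values illustrative):

  declare -A app_pid=([target]="")
  declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")

  app=target
  # app_params is deliberately unquoted so it word-splits into flags.
  ./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" \
      --json "${configs_path[$app]}" &
  app_pid[$app]=$!

Keying everything on $app is what lets json_config.sh drive its target and initiator flavors (note the spdk_initiator_config.json cleanup earlier) through the same start/stop helpers.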
00:08:58.860 21:05:21 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:58.860 21:05:21 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:08:58.860 21:05:21 -- json_config/json_config_extra_key.sh@25 -- # shift 00:08:58.860 21:05:21 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:08:58.860 21:05:21 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:08:58.860 21:05:21 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=117723 00:08:58.860 Waiting for target to run... 00:08:58.860 21:05:21 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:08:58.860 21:05:21 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:58.860 21:05:21 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 117723 /var/tmp/spdk_tgt.sock 00:08:58.860 21:05:21 -- common/autotest_common.sh@819 -- # '[' -z 117723 ']' 00:08:58.860 21:05:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:58.860 21:05:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:58.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:58.860 21:05:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:58.860 21:05:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:58.860 21:05:21 -- common/autotest_common.sh@10 -- # set +x 00:08:59.119 [2024-06-07 21:05:21.547194] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:59.119 [2024-06-07 21:05:21.547364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117723 ] 00:08:59.378 [2024-06-07 21:05:21.987297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.378 [2024-06-07 21:05:22.042348] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:59.378 [2024-06-07 21:05:22.042614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.945 21:05:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:59.945 21:05:22 -- common/autotest_common.sh@852 -- # return 0 00:08:59.945 00:08:59.945 21:05:22 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:08:59.945 INFO: shutting down applications... 00:08:59.945 21:05:22 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
00:08:59.945 21:05:22 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:08:59.945 21:05:22 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:08:59.946 21:05:22 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:08:59.946 21:05:22 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 117723 ]] 00:08:59.946 21:05:22 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 117723 00:08:59.946 21:05:22 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:08:59.946 21:05:22 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:59.946 21:05:22 -- json_config/json_config_extra_key.sh@50 -- # kill -0 117723 00:08:59.946 21:05:22 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:09:00.520 21:05:22 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:09:00.520 21:05:22 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:00.520 21:05:22 -- json_config/json_config_extra_key.sh@50 -- # kill -0 117723 00:09:00.520 21:05:22 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:09:00.520 21:05:22 -- json_config/json_config_extra_key.sh@52 -- # break 00:09:00.520 SPDK target shutdown done 00:09:00.520 21:05:22 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:09:00.520 21:05:22 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:09:00.520 Success 00:09:00.520 21:05:22 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:09:00.520 00:09:00.520 real 0m1.524s 00:09:00.520 user 0m1.415s 00:09:00.520 sys 0m0.422s 00:09:00.520 21:05:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.520 21:05:22 -- common/autotest_common.sh@10 -- # set +x 00:09:00.520 ************************************ 00:09:00.520 END TEST json_config_extra_key 00:09:00.520 ************************************ 00:09:00.520 21:05:22 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:00.520 21:05:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:00.520 21:05:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:00.520 21:05:22 -- common/autotest_common.sh@10 -- # set +x 00:09:00.520 ************************************ 00:09:00.520 START TEST alias_rpc 00:09:00.520 ************************************ 00:09:00.520 21:05:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:00.520 * Looking for test storage... 00:09:00.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:00.520 21:05:23 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:00.520 21:05:23 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=117795 00:09:00.520 21:05:23 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 117795 00:09:00.520 21:05:23 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:00.520 21:05:23 -- common/autotest_common.sh@819 -- # '[' -z 117795 ']' 00:09:00.520 21:05:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.520 21:05:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:00.520 21:05:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
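The shutdown just traced for pid 117723 (and earlier for 117295) is a SIGINT followed by a bounded kill -0 poll, roughly:

  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
    if ! kill -0 "$pid" 2>/dev/null; then
      echo 'SPDK target shutdown done'
      break
    fi
    sleep 0.5
  done

Thirty half-second probes give the target about 15 seconds to drain and exit cleanly; the real helper treats a still-live pid after that as a test failure, a path this sketch omits.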
00:09:00.520 21:05:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:00.520 21:05:23 -- common/autotest_common.sh@10 -- # set +x 00:09:00.520 [2024-06-07 21:05:23.140252] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:00.520 [2024-06-07 21:05:23.140757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117795 ] 00:09:00.781 [2024-06-07 21:05:23.310054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.781 [2024-06-07 21:05:23.399566] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:00.781 [2024-06-07 21:05:23.399895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.715 21:05:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:01.715 21:05:24 -- common/autotest_common.sh@852 -- # return 0 00:09:01.715 21:05:24 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:01.715 21:05:24 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 117795 00:09:01.715 21:05:24 -- common/autotest_common.sh@926 -- # '[' -z 117795 ']' 00:09:01.715 21:05:24 -- common/autotest_common.sh@930 -- # kill -0 117795 00:09:01.715 21:05:24 -- common/autotest_common.sh@931 -- # uname 00:09:01.716 21:05:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:01.716 21:05:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117795 00:09:01.716 21:05:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:01.716 killing process with pid 117795 00:09:01.716 21:05:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:01.716 21:05:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117795' 00:09:01.716 21:05:24 -- common/autotest_common.sh@945 -- # kill 117795 00:09:01.716 21:05:24 -- common/autotest_common.sh@950 -- # wait 117795 00:09:02.283 00:09:02.283 real 0m1.760s 00:09:02.283 user 0m1.953s 00:09:02.283 sys 0m0.412s 00:09:02.283 21:05:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.283 21:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:02.283 ************************************ 00:09:02.283 END TEST alias_rpc 00:09:02.283 ************************************ 00:09:02.283 21:05:24 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:09:02.283 21:05:24 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:02.283 21:05:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:02.283 21:05:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:02.283 21:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:02.283 ************************************ 00:09:02.283 START TEST spdkcli_tcp 00:09:02.283 ************************************ 00:09:02.283 21:05:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:02.283 * Looking for test storage... 
00:09:02.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:02.283 21:05:24 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:02.283 21:05:24 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:02.283 21:05:24 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:02.283 21:05:24 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:02.283 21:05:24 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:02.283 21:05:24 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:02.283 21:05:24 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:02.283 21:05:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:02.283 21:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:02.283 21:05:24 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=117882 00:09:02.283 21:05:24 -- spdkcli/tcp.sh@27 -- # waitforlisten 117882 00:09:02.283 21:05:24 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:02.283 21:05:24 -- common/autotest_common.sh@819 -- # '[' -z 117882 ']' 00:09:02.283 21:05:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.283 21:05:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:02.283 21:05:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.283 21:05:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:02.283 21:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:02.541 [2024-06-07 21:05:24.963021] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
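spdkcli_tcp starts its target with -m 0x3, a two-bit cpumask, which is why the trace that follows reports reactors on both core 0 and core 1 (with -p 0 pinning the main core). A quick way to read any such mask (a hypothetical helper, not from the log):

  mask=0x3
  printf 'cpumask %s -> cores:' "$mask"
  for ((c = 0; c < 64; c++)); do
    (( (mask >> c) & 1 )) && printf ' %d' "$c"
  done
  echo   # prints: cpumask 0x3 -> cores: 0 1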
00:09:02.541 [2024-06-07 21:05:24.963294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117882 ] 00:09:02.541 [2024-06-07 21:05:25.130927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:02.541 [2024-06-07 21:05:25.200343] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:02.541 [2024-06-07 21:05:25.200762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.541 [2024-06-07 21:05:25.200761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.476 21:05:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:03.476 21:05:25 -- common/autotest_common.sh@852 -- # return 0 00:09:03.476 21:05:25 -- spdkcli/tcp.sh@31 -- # socat_pid=117901 00:09:03.476 21:05:25 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:03.476 21:05:25 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:03.476 [ 00:09:03.476 "spdk_get_version", 00:09:03.476 "rpc_get_methods", 00:09:03.476 "trace_get_info", 00:09:03.476 "trace_get_tpoint_group_mask", 00:09:03.476 "trace_disable_tpoint_group", 00:09:03.476 "trace_enable_tpoint_group", 00:09:03.476 "trace_clear_tpoint_mask", 00:09:03.476 "trace_set_tpoint_mask", 00:09:03.476 "framework_get_pci_devices", 00:09:03.476 "framework_get_config", 00:09:03.476 "framework_get_subsystems", 00:09:03.476 "iobuf_get_stats", 00:09:03.476 "iobuf_set_options", 00:09:03.476 "sock_set_default_impl", 00:09:03.476 "sock_impl_set_options", 00:09:03.476 "sock_impl_get_options", 00:09:03.476 "vmd_rescan", 00:09:03.476 "vmd_remove_device", 00:09:03.476 "vmd_enable", 00:09:03.476 "accel_get_stats", 00:09:03.476 "accel_set_options", 00:09:03.476 "accel_set_driver", 00:09:03.476 "accel_crypto_key_destroy", 00:09:03.476 "accel_crypto_keys_get", 00:09:03.476 "accel_crypto_key_create", 00:09:03.476 "accel_assign_opc", 00:09:03.476 "accel_get_module_info", 00:09:03.476 "accel_get_opc_assignments", 00:09:03.476 "notify_get_notifications", 00:09:03.476 "notify_get_types", 00:09:03.476 "bdev_get_histogram", 00:09:03.476 "bdev_enable_histogram", 00:09:03.476 "bdev_set_qos_limit", 00:09:03.476 "bdev_set_qd_sampling_period", 00:09:03.476 "bdev_get_bdevs", 00:09:03.476 "bdev_reset_iostat", 00:09:03.476 "bdev_get_iostat", 00:09:03.476 "bdev_examine", 00:09:03.476 "bdev_wait_for_examine", 00:09:03.476 "bdev_set_options", 00:09:03.476 "scsi_get_devices", 00:09:03.476 "thread_set_cpumask", 00:09:03.476 "framework_get_scheduler", 00:09:03.476 "framework_set_scheduler", 00:09:03.476 "framework_get_reactors", 00:09:03.476 "thread_get_io_channels", 00:09:03.476 "thread_get_pollers", 00:09:03.476 "thread_get_stats", 00:09:03.476 "framework_monitor_context_switch", 00:09:03.476 "spdk_kill_instance", 00:09:03.476 "log_enable_timestamps", 00:09:03.476 "log_get_flags", 00:09:03.476 "log_clear_flag", 00:09:03.476 "log_set_flag", 00:09:03.476 "log_get_level", 00:09:03.476 "log_set_level", 00:09:03.476 "log_get_print_level", 00:09:03.476 "log_set_print_level", 00:09:03.476 "framework_enable_cpumask_locks", 00:09:03.476 "framework_disable_cpumask_locks", 00:09:03.476 "framework_wait_init", 00:09:03.476 "framework_start_init", 00:09:03.476 "virtio_blk_create_transport", 00:09:03.476 "virtio_blk_get_transports", 
00:09:03.476 "vhost_controller_set_coalescing", 00:09:03.476 "vhost_get_controllers", 00:09:03.476 "vhost_delete_controller", 00:09:03.476 "vhost_create_blk_controller", 00:09:03.476 "vhost_scsi_controller_remove_target", 00:09:03.476 "vhost_scsi_controller_add_target", 00:09:03.476 "vhost_start_scsi_controller", 00:09:03.476 "vhost_create_scsi_controller", 00:09:03.476 "nbd_get_disks", 00:09:03.476 "nbd_stop_disk", 00:09:03.476 "nbd_start_disk", 00:09:03.476 "env_dpdk_get_mem_stats", 00:09:03.477 "nvmf_subsystem_get_listeners", 00:09:03.477 "nvmf_subsystem_get_qpairs", 00:09:03.477 "nvmf_subsystem_get_controllers", 00:09:03.477 "nvmf_get_stats", 00:09:03.477 "nvmf_get_transports", 00:09:03.477 "nvmf_create_transport", 00:09:03.477 "nvmf_get_targets", 00:09:03.477 "nvmf_delete_target", 00:09:03.477 "nvmf_create_target", 00:09:03.477 "nvmf_subsystem_allow_any_host", 00:09:03.477 "nvmf_subsystem_remove_host", 00:09:03.477 "nvmf_subsystem_add_host", 00:09:03.477 "nvmf_subsystem_remove_ns", 00:09:03.477 "nvmf_subsystem_add_ns", 00:09:03.477 "nvmf_subsystem_listener_set_ana_state", 00:09:03.477 "nvmf_discovery_get_referrals", 00:09:03.477 "nvmf_discovery_remove_referral", 00:09:03.477 "nvmf_discovery_add_referral", 00:09:03.477 "nvmf_subsystem_remove_listener", 00:09:03.477 "nvmf_subsystem_add_listener", 00:09:03.477 "nvmf_delete_subsystem", 00:09:03.477 "nvmf_create_subsystem", 00:09:03.477 "nvmf_get_subsystems", 00:09:03.477 "nvmf_set_crdt", 00:09:03.477 "nvmf_set_config", 00:09:03.477 "nvmf_set_max_subsystems", 00:09:03.477 "iscsi_set_options", 00:09:03.477 "iscsi_get_auth_groups", 00:09:03.477 "iscsi_auth_group_remove_secret", 00:09:03.477 "iscsi_auth_group_add_secret", 00:09:03.477 "iscsi_delete_auth_group", 00:09:03.477 "iscsi_create_auth_group", 00:09:03.477 "iscsi_set_discovery_auth", 00:09:03.477 "iscsi_get_options", 00:09:03.477 "iscsi_target_node_request_logout", 00:09:03.477 "iscsi_target_node_set_redirect", 00:09:03.477 "iscsi_target_node_set_auth", 00:09:03.477 "iscsi_target_node_add_lun", 00:09:03.477 "iscsi_get_connections", 00:09:03.477 "iscsi_portal_group_set_auth", 00:09:03.477 "iscsi_start_portal_group", 00:09:03.477 "iscsi_delete_portal_group", 00:09:03.477 "iscsi_create_portal_group", 00:09:03.477 "iscsi_get_portal_groups", 00:09:03.477 "iscsi_delete_target_node", 00:09:03.477 "iscsi_target_node_remove_pg_ig_maps", 00:09:03.477 "iscsi_target_node_add_pg_ig_maps", 00:09:03.477 "iscsi_create_target_node", 00:09:03.477 "iscsi_get_target_nodes", 00:09:03.477 "iscsi_delete_initiator_group", 00:09:03.477 "iscsi_initiator_group_remove_initiators", 00:09:03.477 "iscsi_initiator_group_add_initiators", 00:09:03.477 "iscsi_create_initiator_group", 00:09:03.477 "iscsi_get_initiator_groups", 00:09:03.477 "iaa_scan_accel_module", 00:09:03.477 "dsa_scan_accel_module", 00:09:03.477 "ioat_scan_accel_module", 00:09:03.477 "accel_error_inject_error", 00:09:03.477 "bdev_iscsi_delete", 00:09:03.477 "bdev_iscsi_create", 00:09:03.477 "bdev_iscsi_set_options", 00:09:03.477 "bdev_virtio_attach_controller", 00:09:03.477 "bdev_virtio_scsi_get_devices", 00:09:03.477 "bdev_virtio_detach_controller", 00:09:03.477 "bdev_virtio_blk_set_hotplug", 00:09:03.477 "bdev_ftl_set_property", 00:09:03.477 "bdev_ftl_get_properties", 00:09:03.477 "bdev_ftl_get_stats", 00:09:03.477 "bdev_ftl_unmap", 00:09:03.477 "bdev_ftl_unload", 00:09:03.477 "bdev_ftl_delete", 00:09:03.477 "bdev_ftl_load", 00:09:03.477 "bdev_ftl_create", 00:09:03.477 "bdev_aio_delete", 00:09:03.477 "bdev_aio_rescan", 00:09:03.477 "bdev_aio_create", 
00:09:03.477 "blobfs_create", 00:09:03.477 "blobfs_detect", 00:09:03.477 "blobfs_set_cache_size", 00:09:03.477 "bdev_zone_block_delete", 00:09:03.477 "bdev_zone_block_create", 00:09:03.477 "bdev_delay_delete", 00:09:03.477 "bdev_delay_create", 00:09:03.477 "bdev_delay_update_latency", 00:09:03.477 "bdev_split_delete", 00:09:03.477 "bdev_split_create", 00:09:03.477 "bdev_error_inject_error", 00:09:03.477 "bdev_error_delete", 00:09:03.477 "bdev_error_create", 00:09:03.477 "bdev_raid_set_options", 00:09:03.477 "bdev_raid_remove_base_bdev", 00:09:03.477 "bdev_raid_add_base_bdev", 00:09:03.477 "bdev_raid_delete", 00:09:03.477 "bdev_raid_create", 00:09:03.477 "bdev_raid_get_bdevs", 00:09:03.477 "bdev_lvol_grow_lvstore", 00:09:03.477 "bdev_lvol_get_lvols", 00:09:03.477 "bdev_lvol_get_lvstores", 00:09:03.477 "bdev_lvol_delete", 00:09:03.477 "bdev_lvol_set_read_only", 00:09:03.477 "bdev_lvol_resize", 00:09:03.477 "bdev_lvol_decouple_parent", 00:09:03.477 "bdev_lvol_inflate", 00:09:03.477 "bdev_lvol_rename", 00:09:03.477 "bdev_lvol_clone_bdev", 00:09:03.477 "bdev_lvol_clone", 00:09:03.477 "bdev_lvol_snapshot", 00:09:03.477 "bdev_lvol_create", 00:09:03.477 "bdev_lvol_delete_lvstore", 00:09:03.477 "bdev_lvol_rename_lvstore", 00:09:03.477 "bdev_lvol_create_lvstore", 00:09:03.477 "bdev_passthru_delete", 00:09:03.477 "bdev_passthru_create", 00:09:03.477 "bdev_nvme_cuse_unregister", 00:09:03.477 "bdev_nvme_cuse_register", 00:09:03.477 "bdev_opal_new_user", 00:09:03.477 "bdev_opal_set_lock_state", 00:09:03.477 "bdev_opal_delete", 00:09:03.477 "bdev_opal_get_info", 00:09:03.477 "bdev_opal_create", 00:09:03.477 "bdev_nvme_opal_revert", 00:09:03.477 "bdev_nvme_opal_init", 00:09:03.477 "bdev_nvme_send_cmd", 00:09:03.477 "bdev_nvme_get_path_iostat", 00:09:03.477 "bdev_nvme_get_mdns_discovery_info", 00:09:03.477 "bdev_nvme_stop_mdns_discovery", 00:09:03.477 "bdev_nvme_start_mdns_discovery", 00:09:03.477 "bdev_nvme_set_multipath_policy", 00:09:03.477 "bdev_nvme_set_preferred_path", 00:09:03.477 "bdev_nvme_get_io_paths", 00:09:03.477 "bdev_nvme_remove_error_injection", 00:09:03.477 "bdev_nvme_add_error_injection", 00:09:03.477 "bdev_nvme_get_discovery_info", 00:09:03.477 "bdev_nvme_stop_discovery", 00:09:03.477 "bdev_nvme_start_discovery", 00:09:03.477 "bdev_nvme_get_controller_health_info", 00:09:03.477 "bdev_nvme_disable_controller", 00:09:03.477 "bdev_nvme_enable_controller", 00:09:03.477 "bdev_nvme_reset_controller", 00:09:03.477 "bdev_nvme_get_transport_statistics", 00:09:03.477 "bdev_nvme_apply_firmware", 00:09:03.477 "bdev_nvme_detach_controller", 00:09:03.477 "bdev_nvme_get_controllers", 00:09:03.477 "bdev_nvme_attach_controller", 00:09:03.477 "bdev_nvme_set_hotplug", 00:09:03.477 "bdev_nvme_set_options", 00:09:03.477 "bdev_null_resize", 00:09:03.477 "bdev_null_delete", 00:09:03.477 "bdev_null_create", 00:09:03.477 "bdev_malloc_delete", 00:09:03.477 "bdev_malloc_create" 00:09:03.477 ] 00:09:03.477 21:05:26 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:03.477 21:05:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:03.477 21:05:26 -- common/autotest_common.sh@10 -- # set +x 00:09:03.477 21:05:26 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:03.477 21:05:26 -- spdkcli/tcp.sh@38 -- # killprocess 117882 00:09:03.477 21:05:26 -- common/autotest_common.sh@926 -- # '[' -z 117882 ']' 00:09:03.477 21:05:26 -- common/autotest_common.sh@930 -- # kill -0 117882 00:09:03.477 21:05:26 -- common/autotest_common.sh@931 -- # uname 00:09:03.477 21:05:26 -- common/autotest_common.sh@931 
-- # '[' Linux = Linux ']' 00:09:03.477 21:05:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117882 00:09:03.477 21:05:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:03.477 21:05:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:03.477 21:05:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117882' 00:09:03.477 killing process with pid 117882 00:09:03.477 21:05:26 -- common/autotest_common.sh@945 -- # kill 117882 00:09:03.477 21:05:26 -- common/autotest_common.sh@950 -- # wait 117882 00:09:04.062 00:09:04.062 real 0m1.759s 00:09:04.062 user 0m3.194s 00:09:04.062 sys 0m0.447s 00:09:04.062 21:05:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.062 ************************************ 00:09:04.062 END TEST spdkcli_tcp 00:09:04.062 ************************************ 00:09:04.062 21:05:26 -- common/autotest_common.sh@10 -- # set +x 00:09:04.062 21:05:26 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:04.062 21:05:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:04.062 21:05:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:04.062 21:05:26 -- common/autotest_common.sh@10 -- # set +x 00:09:04.062 ************************************ 00:09:04.062 START TEST dpdk_mem_utility 00:09:04.062 ************************************ 00:09:04.062 21:05:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:04.062 * Looking for test storage... 00:09:04.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:04.062 21:05:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:04.062 21:05:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=117975 00:09:04.062 21:05:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:04.062 21:05:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 117975 00:09:04.062 21:05:26 -- common/autotest_common.sh@819 -- # '[' -z 117975 ']' 00:09:04.062 21:05:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.062 21:05:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:04.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.062 21:05:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.062 21:05:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:04.062 21:05:26 -- common/autotest_common.sh@10 -- # set +x 00:09:04.326 [2024-06-07 21:05:26.764263] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
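The spdkcli_tcp run that just finished drove its RPCs over TCP by bridging the target's UNIX-domain socket with socat, as the rpc_get_methods trace above shows. The bridge in isolation, assuming socat is installed (port and paths as in the log):

  # Expose the UNIX-domain RPC socket on 127.0.0.1:9998 ...
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # ... and talk to it over TCP instead of the socket path.
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid" 2>/dev/null

Plain TCP-LISTEN serves a single connection and then exits; a longer-lived bridge would typically add socat's fork option, and rpc.py's -r 100 / -t 2 flags add connect retries and a timeout so calls tolerate the bridge coming up.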
00:09:04.326 [2024-06-07 21:05:26.765218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117975 ] 00:09:04.326 [2024-06-07 21:05:26.927347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.586 [2024-06-07 21:05:27.019453] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:04.586 [2024-06-07 21:05:27.019773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.153 21:05:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:05.153 21:05:27 -- common/autotest_common.sh@852 -- # return 0 00:09:05.153 21:05:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:05.153 21:05:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:05.153 21:05:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:05.153 21:05:27 -- common/autotest_common.sh@10 -- # set +x 00:09:05.153 { 00:09:05.154 "filename": "/tmp/spdk_mem_dump.txt" 00:09:05.154 } 00:09:05.154 21:05:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:05.154 21:05:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:05.154 DPDK memory size 814.000000 MiB in 1 heap(s) 00:09:05.154 1 heaps totaling size 814.000000 MiB 00:09:05.154 size: 814.000000 MiB heap id: 0 00:09:05.154 end heaps---------- 00:09:05.154 8 mempools totaling size 598.116089 MiB 00:09:05.154 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:05.154 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:05.154 size: 84.521057 MiB name: bdev_io_117975 00:09:05.154 size: 51.011292 MiB name: evtpool_117975 00:09:05.154 size: 50.003479 MiB name: msgpool_117975 00:09:05.154 size: 21.763794 MiB name: PDU_Pool 00:09:05.154 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:05.154 size: 0.026123 MiB name: Session_Pool 00:09:05.154 end mempools------- 00:09:05.154 6 memzones totaling size 4.142822 MiB 00:09:05.154 size: 1.000366 MiB name: RG_ring_0_117975 00:09:05.154 size: 1.000366 MiB name: RG_ring_1_117975 00:09:05.154 size: 1.000366 MiB name: RG_ring_4_117975 00:09:05.154 size: 1.000366 MiB name: RG_ring_5_117975 00:09:05.154 size: 0.125366 MiB name: RG_ring_2_117975 00:09:05.154 size: 0.015991 MiB name: RG_ring_3_117975 00:09:05.154 end memzones------- 00:09:05.154 21:05:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:05.154 heap id: 0 total size: 814.000000 MiB number of busy elements: 221 number of free elements: 15 00:09:05.154 list of free elements. 
size: 12.486389 MiB 00:09:05.154 element at address: 0x200000400000 with size: 1.999512 MiB 00:09:05.154 element at address: 0x200018e00000 with size: 0.999878 MiB 00:09:05.154 element at address: 0x200019000000 with size: 0.999878 MiB 00:09:05.154 element at address: 0x200003e00000 with size: 0.996277 MiB 00:09:05.154 element at address: 0x200031c00000 with size: 0.994446 MiB 00:09:05.154 element at address: 0x200013800000 with size: 0.978699 MiB 00:09:05.154 element at address: 0x200007000000 with size: 0.959839 MiB 00:09:05.154 element at address: 0x200019200000 with size: 0.936584 MiB 00:09:05.154 element at address: 0x200000200000 with size: 0.837219 MiB 00:09:05.154 element at address: 0x20001aa00000 with size: 0.568054 MiB 00:09:05.154 element at address: 0x20000b200000 with size: 0.489624 MiB 00:09:05.154 element at address: 0x200000800000 with size: 0.487061 MiB 00:09:05.154 element at address: 0x200019400000 with size: 0.485657 MiB 00:09:05.154 element at address: 0x200027e00000 with size: 0.401978 MiB 00:09:05.154 element at address: 0x200003a00000 with size: 0.351685 MiB 00:09:05.154 list of standard malloc elements. size: 199.251038 MiB 00:09:05.154 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:09:05.154 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:09:05.154 element at address: 0x200018efff80 with size: 1.000122 MiB 00:09:05.154 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:09:05.154 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:05.154 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:05.154 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:09:05.154 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:05.154 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:09:05.154 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:09:05.154 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003adb300 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003adb500 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003affa80 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003affb40 with size: 0.000183 MiB 00:09:05.154 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:09:05.154 element at 
address: 0x20000b27da00 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:09:05.154 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:09:05.154 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa93640 
with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:09:05.155 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e66e80 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e66f40 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6db40 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6e100 with size: 0.000183 MiB 
00:09:05.155 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:09:05.155 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:09:05.155 list of memzone associated elements. 
size: 602.262573 MiB 00:09:05.155 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:09:05.155 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:05.155 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:09:05.155 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:05.155 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:09:05.155 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_117975_0 00:09:05.155 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:09:05.155 associated memzone info: size: 48.002930 MiB name: MP_evtpool_117975_0 00:09:05.155 element at address: 0x200003fff380 with size: 48.003052 MiB 00:09:05.155 associated memzone info: size: 48.002930 MiB name: MP_msgpool_117975_0 00:09:05.155 element at address: 0x2000195be940 with size: 20.255554 MiB 00:09:05.155 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:05.155 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:09:05.155 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:05.155 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:09:05.155 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_117975 00:09:05.155 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:09:05.155 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_117975 00:09:05.155 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:05.155 associated memzone info: size: 1.007996 MiB name: MP_evtpool_117975 00:09:05.155 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:09:05.155 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:05.155 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:09:05.155 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:05.155 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:09:05.155 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:05.155 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:09:05.155 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:05.156 element at address: 0x200003eff180 with size: 1.000488 MiB 00:09:05.156 associated memzone info: size: 1.000366 MiB name: RG_ring_0_117975 00:09:05.156 element at address: 0x200003affc00 with size: 1.000488 MiB 00:09:05.156 associated memzone info: size: 1.000366 MiB name: RG_ring_1_117975 00:09:05.156 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:09:05.156 associated memzone info: size: 1.000366 MiB name: RG_ring_4_117975 00:09:05.156 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:09:05.156 associated memzone info: size: 1.000366 MiB name: RG_ring_5_117975 00:09:05.156 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:09:05.156 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_117975 00:09:05.156 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:09:05.156 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:05.156 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:09:05.156 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:05.156 element at address: 0x20001947c540 with size: 0.250488 MiB 00:09:05.156 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:05.156 element at address: 0x200003adf880 with size: 0.125488 MiB 00:09:05.156 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_117975 00:09:05.156 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:09:05.156 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:05.156 element at address: 0x200027e67000 with size: 0.023743 MiB 00:09:05.156 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:05.156 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:09:05.156 associated memzone info: size: 0.015991 MiB name: RG_ring_3_117975 00:09:05.156 element at address: 0x200027e6d140 with size: 0.002441 MiB 00:09:05.156 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:05.156 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:09:05.156 associated memzone info: size: 0.000183 MiB name: MP_msgpool_117975 00:09:05.156 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:09:05.156 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_117975 00:09:05.156 element at address: 0x200027e6dc00 with size: 0.000305 MiB 00:09:05.156 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:05.156 21:05:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:05.156 21:05:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 117975 00:09:05.156 21:05:27 -- common/autotest_common.sh@926 -- # '[' -z 117975 ']' 00:09:05.156 21:05:27 -- common/autotest_common.sh@930 -- # kill -0 117975 00:09:05.156 21:05:27 -- common/autotest_common.sh@931 -- # uname 00:09:05.156 21:05:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:05.156 21:05:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117975 00:09:05.414 21:05:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:05.414 21:05:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:05.414 killing process with pid 117975 00:09:05.414 21:05:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117975' 00:09:05.414 21:05:27 -- common/autotest_common.sh@945 -- # kill 117975 00:09:05.415 21:05:27 -- common/autotest_common.sh@950 -- # wait 117975 00:09:05.673 00:09:05.673 real 0m1.629s 00:09:05.673 user 0m1.712s 00:09:05.673 sys 0m0.428s 00:09:05.673 21:05:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.673 ************************************ 00:09:05.673 END TEST dpdk_mem_utility 00:09:05.673 ************************************ 00:09:05.673 21:05:28 -- common/autotest_common.sh@10 -- # set +x 00:09:05.673 21:05:28 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:05.673 21:05:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:05.673 21:05:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:05.673 21:05:28 -- common/autotest_common.sh@10 -- # set +x 00:09:05.673 ************************************ 00:09:05.673 START TEST event 00:09:05.673 ************************************ 00:09:05.673 21:05:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:05.931 * Looking for test storage... 
00:09:05.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:05.931 21:05:28 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:05.931 21:05:28 -- bdev/nbd_common.sh@6 -- # set -e 00:09:05.931 21:05:28 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:05.931 21:05:28 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:05.931 21:05:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:05.931 21:05:28 -- common/autotest_common.sh@10 -- # set +x 00:09:05.931 ************************************ 00:09:05.931 START TEST event_perf 00:09:05.932 ************************************ 00:09:05.932 21:05:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:05.932 Running I/O for 1 seconds...[2024-06-07 21:05:28.417389] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:05.932 [2024-06-07 21:05:28.417618] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118056 ] 00:09:05.932 [2024-06-07 21:05:28.588493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.190 [2024-06-07 21:05:28.665447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.190 [2024-06-07 21:05:28.665573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.190 [2024-06-07 21:05:28.665671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.190 [2024-06-07 21:05:28.665676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.125 Running I/O for 1 seconds... 00:09:07.125 lcore 0: 161433 00:09:07.125 lcore 1: 161432 00:09:07.125 lcore 2: 161433 00:09:07.125 lcore 3: 161432 00:09:07.384 done. 00:09:07.384 00:09:07.384 real 0m1.425s 00:09:07.384 user 0m4.181s 00:09:07.384 sys 0m0.129s 00:09:07.384 21:05:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.384 21:05:29 -- common/autotest_common.sh@10 -- # set +x 00:09:07.384 ************************************ 00:09:07.384 END TEST event_perf 00:09:07.384 ************************************ 00:09:07.384 21:05:29 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:07.384 21:05:29 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:07.384 21:05:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:07.384 21:05:29 -- common/autotest_common.sh@10 -- # set +x 00:09:07.384 ************************************ 00:09:07.384 START TEST event_reactor 00:09:07.384 ************************************ 00:09:07.384 21:05:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:07.384 [2024-06-07 21:05:29.899125] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
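The event_perf run that just finished is a standalone binary; per this trace, the invocation below is what produced the four "lcore N:" counters printed at the end (-m 0xF selects lcores 0-3, -t 1 runs the measurement for one second):

    # invocation from the trace above
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1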
00:09:07.384 [2024-06-07 21:05:29.899452] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118104 ] 00:09:07.643 [2024-06-07 21:05:30.065903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.643 [2024-06-07 21:05:30.184458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.018 test_start 00:09:09.018 oneshot 00:09:09.018 tick 100 00:09:09.018 tick 100 00:09:09.018 tick 250 00:09:09.018 tick 100 00:09:09.018 tick 100 00:09:09.018 tick 250 00:09:09.018 tick 500 00:09:09.018 tick 100 00:09:09.018 tick 100 00:09:09.018 tick 100 00:09:09.018 tick 250 00:09:09.018 tick 100 00:09:09.018 tick 100 00:09:09.018 test_end 00:09:09.018 00:09:09.018 real 0m1.465s 00:09:09.018 user 0m1.254s 00:09:09.018 sys 0m0.108s 00:09:09.018 21:05:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.018 21:05:31 -- common/autotest_common.sh@10 -- # set +x 00:09:09.018 ************************************ 00:09:09.018 END TEST event_reactor 00:09:09.018 ************************************ 00:09:09.018 21:05:31 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:09.018 21:05:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:09.018 21:05:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:09.018 21:05:31 -- common/autotest_common.sh@10 -- # set +x 00:09:09.018 ************************************ 00:09:09.018 START TEST event_reactor_perf 00:09:09.018 ************************************ 00:09:09.018 21:05:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:09.018 [2024-06-07 21:05:31.417355] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:09:09.019 [2024-06-07 21:05:31.417602] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118147 ] 00:09:09.019 [2024-06-07 21:05:31.580767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.277 [2024-06-07 21:05:31.709174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.246 test_start 00:09:10.246 test_end 00:09:10.246 Performance: 345674 events per second 00:09:10.246 00:09:10.246 real 0m1.462s 00:09:10.246 user 0m1.248s 00:09:10.246 sys 0m0.113s 00:09:10.246 21:05:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.246 21:05:32 -- common/autotest_common.sh@10 -- # set +x 00:09:10.246 ************************************ 00:09:10.246 END TEST event_reactor_perf 00:09:10.246 ************************************ 00:09:10.246 21:05:32 -- event/event.sh@49 -- # uname -s 00:09:10.246 21:05:32 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:10.246 21:05:32 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:10.246 21:05:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:10.246 21:05:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:10.246 21:05:32 -- common/autotest_common.sh@10 -- # set +x 00:09:10.246 ************************************ 00:09:10.246 START TEST event_scheduler 00:09:10.246 ************************************ 00:09:10.246 21:05:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:10.505 * Looking for test storage... 00:09:10.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:10.505 21:05:32 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:10.505 21:05:32 -- scheduler/scheduler.sh@35 -- # scheduler_pid=118232 00:09:10.505 21:05:32 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:10.505 21:05:32 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:10.505 21:05:32 -- scheduler/scheduler.sh@37 -- # waitforlisten 118232 00:09:10.505 21:05:32 -- common/autotest_common.sh@819 -- # '[' -z 118232 ']' 00:09:10.505 21:05:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.505 21:05:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:10.505 21:05:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.505 21:05:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:10.505 21:05:32 -- common/autotest_common.sh@10 -- # set +x 00:09:10.505 [2024-06-07 21:05:33.052381] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
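Similarly, the "Performance: 345674 events per second" figure above comes from the reactor_perf harness running on a single reactor (-c 0x1 in its EAL arguments); as a standalone command, per this trace:

    # invocation from the trace above; one reactor, one-second measurement
    /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1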
00:09:10.505 [2024-06-07 21:05:33.052620] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118232 ] 00:09:10.764 [2024-06-07 21:05:33.246552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.764 [2024-06-07 21:05:33.313646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.764 [2024-06-07 21:05:33.313755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.764 [2024-06-07 21:05:33.313905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.764 [2024-06-07 21:05:33.313909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.699 21:05:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:11.699 21:05:34 -- common/autotest_common.sh@852 -- # return 0 00:09:11.699 21:05:34 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:11.699 21:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.699 21:05:34 -- common/autotest_common.sh@10 -- # set +x 00:09:11.699 POWER: Env isn't set yet! 00:09:11.699 POWER: Attempting to initialise ACPI cpufreq power management... 00:09:11.699 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:11.699 POWER: Cannot set governor of lcore 0 to userspace 00:09:11.699 POWER: Attempting to initialise PSTAT power management... 00:09:11.699 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:11.699 POWER: Cannot set governor of lcore 0 to performance 00:09:11.699 POWER: Attempting to initialise AMD PSTATE power management... 00:09:11.699 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:11.699 POWER: Cannot set governor of lcore 0 to userspace 00:09:11.699 POWER: Attempting to initialise CPPC power management... 00:09:11.699 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:11.699 POWER: Cannot set governor of lcore 0 to userspace 00:09:11.699 POWER: Attempting to initialise VM power management... 00:09:11.699 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:11.699 POWER: Unable to set Power Management Environment for lcore 0 00:09:11.699 [2024-06-07 21:05:34.018828] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:09:11.699 [2024-06-07 21:05:34.019049] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:09:11.699 [2024-06-07 21:05:34.019181] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:09:11.699 21:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:11.699 21:05:34 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:11.699 21:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.699 21:05:34 -- common/autotest_common.sh@10 -- # set +x 00:09:11.699 [2024-06-07 21:05:34.103264] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
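The scheduler_create_thread test that follows drives the test application through an rpc.py plugin; every call below appears verbatim in its trace. Thread IDs 11 and 12 are simply the IDs this run assigned, and the -a values read as active percentages, as thread names like one_third_active -a 30 indicate:

    # plugin RPCs exercised in the trace below; rpc.py locates scheduler_plugin via PYTHONPATH
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"
    $rpc_py scheduler_thread_create -n active_pinned -m 0x1 -a 100   # pinned to core 0, 100% busy
    $rpc_py scheduler_thread_create -n half_active -a 0              # returns thread id 11 in this run
    $rpc_py scheduler_thread_set_active 11 50                        # raise it to 50% busy
    $rpc_py scheduler_thread_create -n deleted -a 100                # returns thread id 12
    $rpc_py scheduler_thread_delete 12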
00:09:11.699 21:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:11.699 21:05:34 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:11.699 21:05:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:11.699 21:05:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:11.699 21:05:34 -- common/autotest_common.sh@10 -- # set +x 00:09:11.699 ************************************ 00:09:11.699 START TEST scheduler_create_thread 00:09:11.699 ************************************ 00:09:11.699 21:05:34 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:09:11.699 21:05:34 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:11.699 21:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.699 21:05:34 -- common/autotest_common.sh@10 -- # set +x 00:09:11.699 2 00:09:11.699 21:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:11.699 21:05:34 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:11.699 21:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.699 21:05:34 -- common/autotest_common.sh@10 -- # set +x 00:09:11.699 3 00:09:11.699 21:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:11.699 21:05:34 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:11.699 21:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.699 21:05:34 -- common/autotest_common.sh@10 -- # set +x 00:09:11.699 4 00:09:11.699 21:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:11.699 21:05:34 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:11.699 21:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.699 21:05:34 -- common/autotest_common.sh@10 -- # set +x 00:09:11.699 5 00:09:11.699 21:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:11.699 21:05:34 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:11.699 21:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.699 21:05:34 -- common/autotest_common.sh@10 -- # set +x 00:09:11.699 6 00:09:11.699 21:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:11.699 21:05:34 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:11.699 21:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.699 21:05:34 -- common/autotest_common.sh@10 -- # set +x 00:09:11.699 7 00:09:11.699 21:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:11.699 21:05:34 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:11.699 21:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.699 21:05:34 -- common/autotest_common.sh@10 -- # set +x 00:09:11.699 8 00:09:11.699 21:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:11.699 21:05:34 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:11.699 21:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.699 21:05:34 -- common/autotest_common.sh@10 -- # set +x 00:09:11.699 9 00:09:11.699 
21:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:11.699 21:05:34 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:11.699 21:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.699 21:05:34 -- common/autotest_common.sh@10 -- # set +x 00:09:11.699 10 00:09:11.699 21:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:11.699 21:05:34 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:11.699 21:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.699 21:05:34 -- common/autotest_common.sh@10 -- # set +x 00:09:11.699 21:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:11.699 21:05:34 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:11.699 21:05:34 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:11.699 21:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.699 21:05:34 -- common/autotest_common.sh@10 -- # set +x 00:09:11.699 21:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:11.699 21:05:34 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:11.699 21:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.699 21:05:34 -- common/autotest_common.sh@10 -- # set +x 00:09:13.072 21:05:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.072 21:05:35 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:13.072 21:05:35 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:13.072 21:05:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.072 21:05:35 -- common/autotest_common.sh@10 -- # set +x 00:09:14.446 ************************************ 00:09:14.446 END TEST scheduler_create_thread 00:09:14.446 ************************************ 00:09:14.447 21:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:14.447 00:09:14.447 real 0m2.614s 00:09:14.447 user 0m0.015s 00:09:14.447 sys 0m0.002s 00:09:14.447 21:05:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.447 21:05:36 -- common/autotest_common.sh@10 -- # set +x 00:09:14.447 21:05:36 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:14.447 21:05:36 -- scheduler/scheduler.sh@46 -- # killprocess 118232 00:09:14.447 21:05:36 -- common/autotest_common.sh@926 -- # '[' -z 118232 ']' 00:09:14.447 21:05:36 -- common/autotest_common.sh@930 -- # kill -0 118232 00:09:14.447 21:05:36 -- common/autotest_common.sh@931 -- # uname 00:09:14.447 21:05:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:14.447 21:05:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118232 00:09:14.447 killing process with pid 118232 00:09:14.447 21:05:36 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:09:14.447 21:05:36 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:09:14.447 21:05:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118232' 00:09:14.447 21:05:36 -- common/autotest_common.sh@945 -- # kill 118232 00:09:14.447 21:05:36 -- common/autotest_common.sh@950 -- # wait 118232 00:09:14.705 [2024-06-07 21:05:37.212182] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
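The killprocess helper used after every test in this log follows the same pattern each time; a simplified sketch of what its trace shows (the sudo-wrapped branch, never taken in this run, is reduced to the guard):

    # simplified from the autotest_common.sh killprocess trace
    kill -0 "$pid"                                    # verify the process is still alive
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for these apps
    if [ "$process_name" != sudo ]; then              # sudo-wrapped targets are handled differently
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid"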
00:09:14.963 00:09:14.963 real 0m4.573s 00:09:14.963 user 0m8.549s 00:09:14.963 sys 0m0.382s 00:09:14.963 ************************************ 00:09:14.963 END TEST event_scheduler 00:09:14.963 21:05:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.963 21:05:37 -- common/autotest_common.sh@10 -- # set +x 00:09:14.963 ************************************ 00:09:14.963 21:05:37 -- event/event.sh@51 -- # modprobe -n nbd 00:09:14.963 21:05:37 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:14.963 21:05:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:14.963 21:05:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:14.963 21:05:37 -- common/autotest_common.sh@10 -- # set +x 00:09:14.963 ************************************ 00:09:14.963 START TEST app_repeat 00:09:14.963 ************************************ 00:09:14.963 21:05:37 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:09:14.963 21:05:37 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.963 21:05:37 -- event/event.sh@13 -- # nbd_list=("/dev/nbd0" "/dev/nbd1") 00:09:14.963 21:05:37 -- event/event.sh@13 -- # local nbd_list 00:09:14.963 21:05:37 -- event/event.sh@14 -- # bdev_list=("Malloc0" "Malloc1") 00:09:14.963 21:05:37 -- event/event.sh@14 -- # local bdev_list 00:09:14.963 21:05:37 -- event/event.sh@15 -- # local repeat_times=4 00:09:14.963 21:05:37 -- event/event.sh@17 -- # modprobe nbd 00:09:14.963 21:05:37 -- event/event.sh@19 -- # repeat_pid=118358 00:09:14.963 21:05:37 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:14.963 Process app_repeat pid: 118358 00:09:14.963 21:05:37 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 118358' 00:09:14.963 21:05:37 -- event/event.sh@23 -- # for i in {0..2} 00:09:14.963 21:05:37 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:14.963 spdk_app_start Round 0 00:09:14.963 21:05:37 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:14.963 21:05:37 -- event/event.sh@25 -- # waitforlisten 118358 /var/tmp/spdk-nbd.sock 00:09:14.963 21:05:37 -- common/autotest_common.sh@819 -- # '[' -z 118358 ']' 00:09:14.964 21:05:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:14.964 21:05:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:14.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:14.964 21:05:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:14.964 21:05:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:14.964 21:05:37 -- common/autotest_common.sh@10 -- # set +x 00:09:14.964 [2024-06-07 21:05:37.577218] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
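Each app_repeat round traced below is the same create/attach/verify/teardown cycle against the app's dedicated RPC socket; condensed here to one of the two nbd devices, with the sizes and paths this log uses:

    # one round, following the nbd_common.sh helpers traced below
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc_py bdev_malloc_create 64 4096             # 64 MiB bdev with 4096-byte blocks -> Malloc0
    $rpc_py nbd_start_disk Malloc0 /dev/nbd0       # expose the bdev as a kernel block device
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0             # read back through nbd and verify
    $rpc_py nbd_stop_disk /dev/nbd0
    $rpc_py nbd_get_disks                          # reports [] once both disks are stopped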
00:09:14.964 [2024-06-07 21:05:37.577570] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118358 ] 00:09:15.221 [2024-06-07 21:05:37.750324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:15.221 [2024-06-07 21:05:37.880115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.221 [2024-06-07 21:05:37.880129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.155 21:05:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:16.155 21:05:38 -- common/autotest_common.sh@852 -- # return 0 00:09:16.155 21:05:38 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:16.414 Malloc0 00:09:16.414 21:05:38 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:16.414 Malloc1 00:09:16.672 21:05:39 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:16.672 21:05:39 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.672 21:05:39 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:09:16.672 21:05:39 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:16.672 21:05:39 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:09:16.672 21:05:39 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:16.672 21:05:39 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:16.672 21:05:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.672 21:05:39 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:09:16.672 21:05:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:16.672 21:05:39 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:09:16.672 21:05:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:16.672 21:05:39 -- bdev/nbd_common.sh@12 -- # local i 00:09:16.672 21:05:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:16.672 21:05:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:16.672 21:05:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:16.672 /dev/nbd0 00:09:16.672 21:05:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:16.672 21:05:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:16.672 21:05:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:16.672 21:05:39 -- common/autotest_common.sh@857 -- # local i 00:09:16.672 21:05:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:16.672 21:05:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:16.672 21:05:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:16.672 21:05:39 -- common/autotest_common.sh@861 -- # break 00:09:16.672 21:05:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:16.672 21:05:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:16.672 21:05:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:16.672 1+0 records in 00:09:16.672 1+0 records out 00:09:16.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281898 s, 14.5 MB/s 00:09:16.672 21:05:39 -- common/autotest_common.sh@874 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:16.672 21:05:39 -- common/autotest_common.sh@874 -- # size=4096 00:09:16.672 21:05:39 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:16.672 21:05:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:16.672 21:05:39 -- common/autotest_common.sh@877 -- # return 0 00:09:16.672 21:05:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:16.672 21:05:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:16.672 21:05:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:16.931 /dev/nbd1 00:09:16.931 21:05:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:17.189 21:05:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:17.189 21:05:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:17.189 21:05:39 -- common/autotest_common.sh@857 -- # local i 00:09:17.189 21:05:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:17.189 21:05:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:17.189 21:05:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:17.189 21:05:39 -- common/autotest_common.sh@861 -- # break 00:09:17.189 21:05:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:17.189 21:05:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:17.189 21:05:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:17.189 1+0 records in 00:09:17.189 1+0 records out 00:09:17.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354609 s, 11.6 MB/s 00:09:17.189 21:05:39 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:17.189 21:05:39 -- common/autotest_common.sh@874 -- # size=4096 00:09:17.189 21:05:39 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:17.189 21:05:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:17.189 21:05:39 -- common/autotest_common.sh@877 -- # return 0 00:09:17.189 21:05:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:17.189 21:05:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:17.189 21:05:39 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:17.189 21:05:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.189 21:05:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:17.189 21:05:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:17.189 { 00:09:17.189 "nbd_device": "/dev/nbd0", 00:09:17.189 "bdev_name": "Malloc0" 00:09:17.189 }, 00:09:17.189 { 00:09:17.189 "nbd_device": "/dev/nbd1", 00:09:17.189 "bdev_name": "Malloc1" 00:09:17.190 } 00:09:17.190 ]' 00:09:17.190 21:05:39 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:17.190 { 00:09:17.190 "nbd_device": "/dev/nbd0", 00:09:17.190 "bdev_name": "Malloc0" 00:09:17.190 }, 00:09:17.190 { 00:09:17.190 "nbd_device": "/dev/nbd1", 00:09:17.190 "bdev_name": "Malloc1" 00:09:17.190 } 00:09:17.190 ]' 00:09:17.190 21:05:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:17.448 /dev/nbd1' 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:17.448 /dev/nbd1' 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:17.448 
21:05:39 -- bdev/nbd_common.sh@65 -- # count=2 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@95 -- # count=2 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:17.448 256+0 records in 00:09:17.448 256+0 records out 00:09:17.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00788864 s, 133 MB/s 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:17.448 256+0 records in 00:09:17.448 256+0 records out 00:09:17.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027482 s, 38.2 MB/s 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:17.448 256+0 records in 00:09:17.448 256+0 records out 00:09:17.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292669 s, 35.8 MB/s 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:17.448 21:05:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:17.448 21:05:40 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:17.448 21:05:40 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:17.448 21:05:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.448 21:05:40 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:09:17.448 21:05:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:17.448 21:05:40 -- bdev/nbd_common.sh@51 -- # local i 00:09:17.448 21:05:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:17.448 21:05:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:17.707 21:05:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:17.707 
21:05:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:17.707 21:05:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:17.707 21:05:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:17.707 21:05:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:17.707 21:05:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:17.707 21:05:40 -- bdev/nbd_common.sh@41 -- # break 00:09:17.707 21:05:40 -- bdev/nbd_common.sh@45 -- # return 0 00:09:17.707 21:05:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:17.707 21:05:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:17.965 21:05:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:17.965 21:05:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:17.965 21:05:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:17.965 21:05:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:17.965 21:05:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:17.966 21:05:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:17.966 21:05:40 -- bdev/nbd_common.sh@41 -- # break 00:09:17.966 21:05:40 -- bdev/nbd_common.sh@45 -- # return 0 00:09:17.966 21:05:40 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:17.966 21:05:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.966 21:05:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:18.224 21:05:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:18.224 21:05:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:18.224 21:05:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:18.224 21:05:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:18.225 21:05:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:18.225 21:05:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:18.225 21:05:40 -- bdev/nbd_common.sh@65 -- # true 00:09:18.225 21:05:40 -- bdev/nbd_common.sh@65 -- # count=0 00:09:18.225 21:05:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:18.225 21:05:40 -- bdev/nbd_common.sh@104 -- # count=0 00:09:18.225 21:05:40 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:18.225 21:05:40 -- bdev/nbd_common.sh@109 -- # return 0 00:09:18.225 21:05:40 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:18.792 21:05:41 -- event/event.sh@35 -- # sleep 3 00:09:19.051 [2024-06-07 21:05:41.483547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:19.051 [2024-06-07 21:05:41.590247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.051 [2024-06-07 21:05:41.590256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.051 [2024-06-07 21:05:41.662338] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:19.051 [2024-06-07 21:05:41.662832] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:21.581 spdk_app_start Round 1 00:09:21.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
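The write/verify cycle traced above reduces to a small dd-and-cmp pattern: fill a scratch file with 1 MiB of random data, push it onto every exported NBD node with O_DIRECT, then byte-compare each node against the file. A minimal sketch of that pattern, using the paths shown in the log (the function name is ours, not the library's):

    nbd_write_verify_sketch() {
        local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
        local nbd_list=(/dev/nbd0 /dev/nbd1)
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 1 MiB of random data
        for dev in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # write phase, O_DIRECT
        done
        for dev in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$dev" || return 1                 # verify phase
        done
        rm "$tmp_file"
    }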
00:09:21.581 21:05:44 -- event/event.sh@23 -- # for i in {0..2} 00:09:21.581 21:05:44 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:21.581 21:05:44 -- event/event.sh@25 -- # waitforlisten 118358 /var/tmp/spdk-nbd.sock 00:09:21.581 21:05:44 -- common/autotest_common.sh@819 -- # '[' -z 118358 ']' 00:09:21.581 21:05:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:21.581 21:05:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:21.581 21:05:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:21.581 21:05:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:21.582 21:05:44 -- common/autotest_common.sh@10 -- # set +x 00:09:21.839 21:05:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:21.839 21:05:44 -- common/autotest_common.sh@852 -- # return 0 00:09:21.839 21:05:44 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:22.097 Malloc0 00:09:22.097 21:05:44 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:22.356 Malloc1 00:09:22.356 21:05:44 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:22.356 21:05:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.356 21:05:44 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:09:22.356 21:05:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:22.356 21:05:44 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:09:22.356 21:05:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:22.356 21:05:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:22.356 21:05:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.356 21:05:44 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:09:22.356 21:05:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:22.356 21:05:44 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:09:22.356 21:05:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:22.356 21:05:44 -- bdev/nbd_common.sh@12 -- # local i 00:09:22.356 21:05:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:22.356 21:05:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:22.356 21:05:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:22.615 /dev/nbd0 00:09:22.615 21:05:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:22.615 21:05:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:22.615 21:05:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:22.615 21:05:45 -- common/autotest_common.sh@857 -- # local i 00:09:22.615 21:05:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:22.615 21:05:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:22.615 21:05:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:22.615 21:05:45 -- common/autotest_common.sh@861 -- # break 00:09:22.615 21:05:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:22.615 21:05:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:22.615 21:05:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:22.615 1+0 records in 00:09:22.615 1+0 
records out 00:09:22.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055289 s, 7.4 MB/s 00:09:22.615 21:05:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:22.615 21:05:45 -- common/autotest_common.sh@874 -- # size=4096 00:09:22.615 21:05:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:22.615 21:05:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:22.615 21:05:45 -- common/autotest_common.sh@877 -- # return 0 00:09:22.615 21:05:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:22.615 21:05:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:22.615 21:05:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:22.876 /dev/nbd1 00:09:22.876 21:05:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:22.876 21:05:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:22.876 21:05:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:22.876 21:05:45 -- common/autotest_common.sh@857 -- # local i 00:09:22.876 21:05:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:22.876 21:05:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:22.876 21:05:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:22.876 21:05:45 -- common/autotest_common.sh@861 -- # break 00:09:22.876 21:05:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:22.876 21:05:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:22.876 21:05:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:22.876 1+0 records in 00:09:22.876 1+0 records out 00:09:22.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035016 s, 11.7 MB/s 00:09:22.876 21:05:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:22.876 21:05:45 -- common/autotest_common.sh@874 -- # size=4096 00:09:22.876 21:05:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:22.876 21:05:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:22.876 21:05:45 -- common/autotest_common.sh@877 -- # return 0 00:09:22.876 21:05:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:22.876 21:05:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:22.876 21:05:45 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:22.876 21:05:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.876 21:05:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:23.135 21:05:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:23.135 { 00:09:23.135 "nbd_device": "/dev/nbd0", 00:09:23.135 "bdev_name": "Malloc0" 00:09:23.135 }, 00:09:23.135 { 00:09:23.135 "nbd_device": "/dev/nbd1", 00:09:23.135 "bdev_name": "Malloc1" 00:09:23.135 } 00:09:23.135 ]' 00:09:23.135 21:05:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:23.135 { 00:09:23.135 "nbd_device": "/dev/nbd0", 00:09:23.135 "bdev_name": "Malloc0" 00:09:23.135 }, 00:09:23.135 { 00:09:23.135 "nbd_device": "/dev/nbd1", 00:09:23.135 "bdev_name": "Malloc1" 00:09:23.135 } 00:09:23.135 ]' 00:09:23.135 21:05:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:23.135 21:05:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:23.135 /dev/nbd1' 00:09:23.135 21:05:45 -- 
bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:23.135 /dev/nbd1' 00:09:23.135 21:05:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:23.135 21:05:45 -- bdev/nbd_common.sh@65 -- # count=2 00:09:23.135 21:05:45 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:23.135 21:05:45 -- bdev/nbd_common.sh@95 -- # count=2 00:09:23.135 21:05:45 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:23.135 21:05:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:23.135 21:05:45 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:23.135 21:05:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:23.135 21:05:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:23.135 21:05:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:23.135 21:05:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:23.135 21:05:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:23.135 256+0 records in 00:09:23.135 256+0 records out 00:09:23.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00582908 s, 180 MB/s 00:09:23.135 21:05:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:23.135 21:05:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:23.394 256+0 records in 00:09:23.394 256+0 records out 00:09:23.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025552 s, 41.0 MB/s 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:23.394 256+0 records in 00:09:23.394 256+0 records out 00:09:23.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291791 s, 35.9 MB/s 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@51 -- # local i 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.394 21:05:45 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:23.652 21:05:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:23.652 21:05:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:23.652 21:05:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:23.652 21:05:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:23.652 21:05:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.652 21:05:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:23.652 21:05:46 -- bdev/nbd_common.sh@41 -- # break 00:09:23.652 21:05:46 -- bdev/nbd_common.sh@45 -- # return 0 00:09:23.652 21:05:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.652 21:05:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:23.910 21:05:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:23.910 21:05:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:23.910 21:05:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:23.910 21:05:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:23.910 21:05:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.910 21:05:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:23.910 21:05:46 -- bdev/nbd_common.sh@41 -- # break 00:09:23.910 21:05:46 -- bdev/nbd_common.sh@45 -- # return 0 00:09:23.910 21:05:46 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:23.910 21:05:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.910 21:05:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:24.170 21:05:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:24.170 21:05:46 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:24.170 21:05:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:24.170 21:05:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:24.170 21:05:46 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:24.170 21:05:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:24.170 21:05:46 -- bdev/nbd_common.sh@65 -- # true 00:09:24.170 21:05:46 -- bdev/nbd_common.sh@65 -- # count=0 00:09:24.170 21:05:46 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:24.170 21:05:46 -- bdev/nbd_common.sh@104 -- # count=0 00:09:24.170 21:05:46 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:24.170 21:05:46 -- bdev/nbd_common.sh@109 -- # return 0 00:09:24.170 21:05:46 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:24.429 21:05:47 -- event/event.sh@35 -- # sleep 3 00:09:24.687 [2024-06-07 21:05:47.238498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:24.687 [2024-06-07 21:05:47.303379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.687 [2024-06-07 21:05:47.303384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.687 [2024-06-07 21:05:47.359289] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:24.688 [2024-06-07 21:05:47.359689] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:27.972 spdk_app_start Round 2 00:09:27.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
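Round 2 repeats the bring-up that round 1 traced above: create a 64 MiB malloc bdev with a 4096-byte block size over the RPC socket, export it as an NBD node, and probe the node with one O_DIRECT read before use. Condensed into a sketch (rpc.py path, socket, and test-file path are taken from the log; the sequencing is our paraphrase of the helpers):

    rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock)
    malloc0=$("${rpc[@]}" bdev_malloc_create 64 4096)   # RPC prints the new bdev name, e.g. Malloc0
    "${rpc[@]}" nbd_start_disk "$malloc0" /dev/nbd0
    grep -q -w nbd0 /proc/partitions                    # node registered with the kernel?
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest)" != 0 ]   # one block came back
    rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest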
00:09:27.972 21:05:50 -- event/event.sh@23 -- # for i in {0..2} 00:09:27.972 21:05:50 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:27.972 21:05:50 -- event/event.sh@25 -- # waitforlisten 118358 /var/tmp/spdk-nbd.sock 00:09:27.972 21:05:50 -- common/autotest_common.sh@819 -- # '[' -z 118358 ']' 00:09:27.972 21:05:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:27.972 21:05:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:27.972 21:05:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:27.972 21:05:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:27.972 21:05:50 -- common/autotest_common.sh@10 -- # set +x 00:09:27.972 21:05:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:27.972 21:05:50 -- common/autotest_common.sh@852 -- # return 0 00:09:27.972 21:05:50 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:27.972 Malloc0 00:09:27.972 21:05:50 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:28.230 Malloc1 00:09:28.231 21:05:50 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:28.231 21:05:50 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.231 21:05:50 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:09:28.231 21:05:50 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:28.231 21:05:50 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:09:28.231 21:05:50 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:28.231 21:05:50 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:28.231 21:05:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.231 21:05:50 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:09:28.231 21:05:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:28.231 21:05:50 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:09:28.231 21:05:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:28.231 21:05:50 -- bdev/nbd_common.sh@12 -- # local i 00:09:28.231 21:05:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:28.231 21:05:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:28.231 21:05:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:28.490 /dev/nbd0 00:09:28.490 21:05:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:28.490 21:05:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:28.490 21:05:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:28.490 21:05:51 -- common/autotest_common.sh@857 -- # local i 00:09:28.490 21:05:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:28.490 21:05:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:28.490 21:05:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:28.490 21:05:51 -- common/autotest_common.sh@861 -- # break 00:09:28.490 21:05:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:28.490 21:05:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:28.490 21:05:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:28.490 1+0 records in 00:09:28.490 1+0 
records out 00:09:28.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574323 s, 7.1 MB/s 00:09:28.490 21:05:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:28.490 21:05:51 -- common/autotest_common.sh@874 -- # size=4096 00:09:28.490 21:05:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:28.490 21:05:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:28.490 21:05:51 -- common/autotest_common.sh@877 -- # return 0 00:09:28.490 21:05:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:28.490 21:05:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:28.490 21:05:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:28.749 /dev/nbd1 00:09:28.749 21:05:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:28.749 21:05:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:28.749 21:05:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:28.749 21:05:51 -- common/autotest_common.sh@857 -- # local i 00:09:28.749 21:05:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:28.749 21:05:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:28.749 21:05:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:28.749 21:05:51 -- common/autotest_common.sh@861 -- # break 00:09:28.749 21:05:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:28.749 21:05:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:28.749 21:05:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:28.749 1+0 records in 00:09:28.749 1+0 records out 00:09:28.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385881 s, 10.6 MB/s 00:09:28.749 21:05:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:28.749 21:05:51 -- common/autotest_common.sh@874 -- # size=4096 00:09:28.749 21:05:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:28.749 21:05:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:28.749 21:05:51 -- common/autotest_common.sh@877 -- # return 0 00:09:28.749 21:05:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:28.749 21:05:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:28.749 21:05:51 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:28.749 21:05:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.749 21:05:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:29.007 21:05:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:29.007 { 00:09:29.007 "nbd_device": "/dev/nbd0", 00:09:29.007 "bdev_name": "Malloc0" 00:09:29.007 }, 00:09:29.007 { 00:09:29.007 "nbd_device": "/dev/nbd1", 00:09:29.007 "bdev_name": "Malloc1" 00:09:29.007 } 00:09:29.007 ]' 00:09:29.007 21:05:51 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:29.007 { 00:09:29.007 "nbd_device": "/dev/nbd0", 00:09:29.007 "bdev_name": "Malloc0" 00:09:29.007 }, 00:09:29.007 { 00:09:29.007 "nbd_device": "/dev/nbd1", 00:09:29.007 "bdev_name": "Malloc1" 00:09:29.007 } 00:09:29.007 ]' 00:09:29.007 21:05:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:29.007 21:05:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:29.007 /dev/nbd1' 00:09:29.007 21:05:51 
-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:29.007 /dev/nbd1' 00:09:29.007 21:05:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:29.007 21:05:51 -- bdev/nbd_common.sh@65 -- # count=2 00:09:29.007 21:05:51 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:29.007 21:05:51 -- bdev/nbd_common.sh@95 -- # count=2 00:09:29.007 21:05:51 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:29.007 21:05:51 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:29.007 21:05:51 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:29.007 21:05:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:29.007 21:05:51 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:29.007 21:05:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:29.007 21:05:51 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:29.007 21:05:51 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:29.007 256+0 records in 00:09:29.007 256+0 records out 00:09:29.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0057295 s, 183 MB/s 00:09:29.007 21:05:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:29.007 21:05:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:29.266 256+0 records in 00:09:29.266 256+0 records out 00:09:29.266 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256897 s, 40.8 MB/s 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:29.266 256+0 records in 00:09:29.266 256+0 records out 00:09:29.266 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029304 s, 35.8 MB/s 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@51 -- # local i 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.266 21:05:51 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:29.526 21:05:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:29.526 21:05:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:29.526 21:05:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:29.526 21:05:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.526 21:05:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.526 21:05:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:29.526 21:05:51 -- bdev/nbd_common.sh@41 -- # break 00:09:29.526 21:05:51 -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.526 21:05:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.526 21:05:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:29.784 21:05:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:29.784 21:05:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:29.784 21:05:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:29.784 21:05:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.784 21:05:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.784 21:05:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:29.784 21:05:52 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:09:29.784 21:05:52 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:09:29.784 21:05:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.784 21:05:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:29.784 21:05:52 -- bdev/nbd_common.sh@41 -- # break 00:09:29.784 21:05:52 -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.784 21:05:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:29.784 21:05:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.784 21:05:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:30.043 21:05:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:30.043 21:05:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:30.043 21:05:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:30.043 21:05:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:30.043 21:05:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:30.043 21:05:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:30.043 21:05:52 -- bdev/nbd_common.sh@65 -- # true 00:09:30.043 21:05:52 -- bdev/nbd_common.sh@65 -- # count=0 00:09:30.043 21:05:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:30.043 21:05:52 -- bdev/nbd_common.sh@104 -- # count=0 00:09:30.043 21:05:52 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:30.043 21:05:52 -- bdev/nbd_common.sh@109 -- # return 0 00:09:30.043 21:05:52 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:30.302 21:05:52 -- event/event.sh@35 -- # sleep 3 00:09:30.569 [2024-06-07 21:05:53.127248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:30.569 [2024-06-07 21:05:53.202311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.569 [2024-06-07 21:05:53.202304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.834 [2024-06-07 21:05:53.256562] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
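Note that the nbd1 teardown above needed one retry: the first /proc/partitions check still saw the device, so the helper slept 100 ms, bumped its counter, and re-checked before breaking out. A behaviorally equivalent sketch of that bounded poll (not the exact library source):

    waitfornbd_exit_sketch() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # done once the name leaves the kernel's partition table
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1
        done
        return 0    # the traced helper appears to return 0 even on timeout
    }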
00:09:30.834 [2024-06-07 21:05:53.256684] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:33.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:33.367 21:05:55 -- event/event.sh@38 -- # waitforlisten 118358 /var/tmp/spdk-nbd.sock 00:09:33.367 21:05:55 -- common/autotest_common.sh@819 -- # '[' -z 118358 ']' 00:09:33.367 21:05:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:33.367 21:05:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:33.367 21:05:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:33.367 21:05:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:33.367 21:05:55 -- common/autotest_common.sh@10 -- # set +x 00:09:33.626 21:05:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:33.626 21:05:56 -- common/autotest_common.sh@852 -- # return 0 00:09:33.626 21:05:56 -- event/event.sh@39 -- # killprocess 118358 00:09:33.626 21:05:56 -- common/autotest_common.sh@926 -- # '[' -z 118358 ']' 00:09:33.626 21:05:56 -- common/autotest_common.sh@930 -- # kill -0 118358 00:09:33.626 21:05:56 -- common/autotest_common.sh@931 -- # uname 00:09:33.626 21:05:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:33.626 21:05:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118358 00:09:33.626 killing process with pid 118358 00:09:33.626 21:05:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:33.626 21:05:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:33.626 21:05:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118358' 00:09:33.626 21:05:56 -- common/autotest_common.sh@945 -- # kill 118358 00:09:33.626 21:05:56 -- common/autotest_common.sh@950 -- # wait 118358 00:09:33.885 spdk_app_start is called in Round 0. 00:09:33.885 Shutdown signal received, stop current app iteration 00:09:33.885 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:09:33.885 spdk_app_start is called in Round 1. 00:09:33.885 Shutdown signal received, stop current app iteration 00:09:33.885 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:09:33.885 spdk_app_start is called in Round 2. 00:09:33.885 Shutdown signal received, stop current app iteration 00:09:33.885 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:09:33.885 spdk_app_start is called in Round 3. 
00:09:33.885 Shutdown signal received, stop current app iteration 00:09:33.885 ************************************ 00:09:33.885 END TEST app_repeat 00:09:33.885 ************************************ 00:09:33.885 21:05:56 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:33.885 21:05:56 -- event/event.sh@42 -- # return 0 00:09:33.885 00:09:33.885 real 0m18.896s 00:09:33.885 user 0m42.326s 00:09:33.885 sys 0m2.634s 00:09:33.885 21:05:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.885 21:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:33.885 21:05:56 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:33.885 21:05:56 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:33.885 21:05:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:33.885 21:05:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:33.885 21:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:33.885 ************************************ 00:09:33.885 START TEST cpu_locks 00:09:33.885 ************************************ 00:09:33.885 21:05:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:33.885 * Looking for test storage... 00:09:33.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:33.885 21:05:56 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:33.885 21:05:56 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:33.885 21:05:56 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:33.885 21:05:56 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:33.885 21:05:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:33.885 21:05:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:33.885 21:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:34.145 ************************************ 00:09:34.145 START TEST default_locks 00:09:34.145 ************************************ 00:09:34.145 21:05:56 -- common/autotest_common.sh@1104 -- # default_locks 00:09:34.145 21:05:56 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=118906 00:09:34.145 21:05:56 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:34.145 21:05:56 -- event/cpu_locks.sh@47 -- # waitforlisten 118906 00:09:34.145 21:05:56 -- common/autotest_common.sh@819 -- # '[' -z 118906 ']' 00:09:34.145 21:05:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.145 21:05:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:34.145 21:05:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.145 21:05:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:34.145 21:05:56 -- common/autotest_common.sh@10 -- # set +x 00:09:34.145 [2024-06-07 21:05:56.618838] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:09:34.145 [2024-06-07 21:05:56.619063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118906 ] 00:09:34.145 [2024-06-07 21:05:56.773807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.403 [2024-06-07 21:05:56.858543] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:34.403 [2024-06-07 21:05:56.858818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.970 21:05:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:34.970 21:05:57 -- common/autotest_common.sh@852 -- # return 0 00:09:34.970 21:05:57 -- event/cpu_locks.sh@49 -- # locks_exist 118906 00:09:34.970 21:05:57 -- event/cpu_locks.sh@22 -- # lslocks -p 118906 00:09:34.970 21:05:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:35.228 21:05:57 -- event/cpu_locks.sh@50 -- # killprocess 118906 00:09:35.228 21:05:57 -- common/autotest_common.sh@926 -- # '[' -z 118906 ']' 00:09:35.228 21:05:57 -- common/autotest_common.sh@930 -- # kill -0 118906 00:09:35.228 21:05:57 -- common/autotest_common.sh@931 -- # uname 00:09:35.228 21:05:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:35.228 21:05:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118906 00:09:35.228 killing process with pid 118906 00:09:35.228 21:05:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:35.228 21:05:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:35.228 21:05:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118906' 00:09:35.228 21:05:57 -- common/autotest_common.sh@945 -- # kill 118906 00:09:35.228 21:05:57 -- common/autotest_common.sh@950 -- # wait 118906 00:09:35.795 21:05:58 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 118906 00:09:35.795 21:05:58 -- common/autotest_common.sh@640 -- # local es=0 00:09:35.795 21:05:58 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 118906 00:09:35.795 21:05:58 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:35.795 21:05:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:35.795 21:05:58 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:35.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.795 ERROR: process (pid: 118906) is no longer running 00:09:35.795 ************************************ 00:09:35.795 END TEST default_locks 00:09:35.795 ************************************ 00:09:35.795 21:05:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:35.795 21:05:58 -- common/autotest_common.sh@643 -- # waitforlisten 118906 00:09:35.795 21:05:58 -- common/autotest_common.sh@819 -- # '[' -z 118906 ']' 00:09:35.795 21:05:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.795 21:05:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:35.795 21:05:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:35.795 21:05:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:35.795 21:05:58 -- common/autotest_common.sh@10 -- # set +x 00:09:35.795 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (118906) - No such process 00:09:35.795 21:05:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:35.795 21:05:58 -- common/autotest_common.sh@852 -- # return 1 00:09:35.795 21:05:58 -- common/autotest_common.sh@643 -- # es=1 00:09:35.795 21:05:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:35.795 21:05:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:35.795 21:05:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:35.795 21:05:58 -- event/cpu_locks.sh@54 -- # no_locks 00:09:35.795 21:05:58 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:09:35.795 21:05:58 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:35.795 21:05:58 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:35.795 00:09:35.795 real 0m1.721s 00:09:35.795 user 0m1.782s 00:09:35.795 sys 0m0.574s 00:09:35.795 21:05:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.795 21:05:58 -- common/autotest_common.sh@10 -- # set +x 00:09:35.795 21:05:58 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:35.795 21:05:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:35.795 21:05:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:35.795 21:05:58 -- common/autotest_common.sh@10 -- # set +x 00:09:35.795 ************************************ 00:09:35.795 START TEST default_locks_via_rpc 00:09:35.795 ************************************ 00:09:35.795 21:05:58 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:09:35.795 21:05:58 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=118967 00:09:35.795 21:05:58 -- event/cpu_locks.sh@63 -- # waitforlisten 118967 00:09:35.795 21:05:58 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:35.795 21:05:58 -- common/autotest_common.sh@819 -- # '[' -z 118967 ']' 00:09:35.795 21:05:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.795 21:05:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:35.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.796 21:05:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.796 21:05:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:35.796 21:05:58 -- common/autotest_common.sh@10 -- # set +x 00:09:35.796 [2024-06-07 21:05:58.398083] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:09:35.796 [2024-06-07 21:05:58.398319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118967 ] 00:09:36.055 [2024-06-07 21:05:58.556725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.055 [2024-06-07 21:05:58.627730] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:36.055 [2024-06-07 21:05:58.627981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.992 21:05:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:36.992 21:05:59 -- common/autotest_common.sh@852 -- # return 0 00:09:36.992 21:05:59 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:36.992 21:05:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:36.992 21:05:59 -- common/autotest_common.sh@10 -- # set +x 00:09:36.992 21:05:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:36.992 21:05:59 -- event/cpu_locks.sh@67 -- # no_locks 00:09:36.992 21:05:59 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:09:36.992 21:05:59 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:36.992 21:05:59 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:36.992 21:05:59 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:36.992 21:05:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:36.992 21:05:59 -- common/autotest_common.sh@10 -- # set +x 00:09:36.992 21:05:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:36.992 21:05:59 -- event/cpu_locks.sh@71 -- # locks_exist 118967 00:09:36.992 21:05:59 -- event/cpu_locks.sh@22 -- # lslocks -p 118967 00:09:36.992 21:05:59 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:36.992 21:05:59 -- event/cpu_locks.sh@73 -- # killprocess 118967 00:09:36.992 21:05:59 -- common/autotest_common.sh@926 -- # '[' -z 118967 ']' 00:09:36.992 21:05:59 -- common/autotest_common.sh@930 -- # kill -0 118967 00:09:36.992 21:05:59 -- common/autotest_common.sh@931 -- # uname 00:09:36.992 21:05:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:36.992 21:05:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118967 00:09:36.992 21:05:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:36.992 21:05:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:36.992 killing process with pid 118967 00:09:36.992 21:05:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118967' 00:09:36.992 21:05:59 -- common/autotest_common.sh@945 -- # kill 118967 00:09:36.992 21:05:59 -- common/autotest_common.sh@950 -- # wait 118967 00:09:37.559 00:09:37.559 real 0m1.707s 00:09:37.559 user 0m1.867s 00:09:37.559 sys 0m0.479s 00:09:37.559 21:06:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.559 ************************************ 00:09:37.559 END TEST default_locks_via_rpc 00:09:37.559 ************************************ 00:09:37.559 21:06:00 -- common/autotest_common.sh@10 -- # set +x 00:09:37.559 21:06:00 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:37.559 21:06:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:37.559 21:06:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:37.559 21:06:00 -- common/autotest_common.sh@10 -- # set +x 00:09:37.559 
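default_locks_via_rpc above toggles the locking machinery at runtime: framework_disable_cpumask_locks releases the per-core lock files, no_locks asserts the /var/tmp/spdk_cpu_lock* glob matches nothing, framework_enable_cpumask_locks re-acquires them, and locks_exist then finds them again via lslocks. The two assertions, sketched (function names mirror the trace; the nullglob handling is an assumption):

    no_locks_sketch() {
        shopt -s nullglob                           # assumed: an empty glob expands to nothing
        local lock_files=(/var/tmp/spdk_cpu_lock*)
        (( ${#lock_files[@]} == 0 ))                # no per-core lock files remain
    }
    locks_exist_sketch() {
        lslocks -p "$1" | grep -q spdk_cpu_lock     # pid holds a lock on a core file
    }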
************************************ 00:09:37.559 START TEST non_locking_app_on_locked_coremask 00:09:37.559 ************************************ 00:09:37.559 21:06:00 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:09:37.559 21:06:00 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=119022 00:09:37.559 21:06:00 -- event/cpu_locks.sh@81 -- # waitforlisten 119022 /var/tmp/spdk.sock 00:09:37.559 21:06:00 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:37.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.559 21:06:00 -- common/autotest_common.sh@819 -- # '[' -z 119022 ']' 00:09:37.559 21:06:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.559 21:06:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:37.559 21:06:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.559 21:06:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:37.559 21:06:00 -- common/autotest_common.sh@10 -- # set +x 00:09:37.559 [2024-06-07 21:06:00.147073] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:37.559 [2024-06-07 21:06:00.147290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119022 ] 00:09:37.818 [2024-06-07 21:06:00.295344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.818 [2024-06-07 21:06:00.379149] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:37.818 [2024-06-07 21:06:00.379453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.753 21:06:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:38.753 21:06:01 -- common/autotest_common.sh@852 -- # return 0 00:09:38.753 21:06:01 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=119043 00:09:38.753 21:06:01 -- event/cpu_locks.sh@85 -- # waitforlisten 119043 /var/tmp/spdk2.sock 00:09:38.753 21:06:01 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:38.753 21:06:01 -- common/autotest_common.sh@819 -- # '[' -z 119043 ']' 00:09:38.753 21:06:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:38.753 21:06:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:38.753 21:06:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:38.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:38.753 21:06:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:38.753 21:06:01 -- common/autotest_common.sh@10 -- # set +x 00:09:38.753 [2024-06-07 21:06:01.207122] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:38.753 [2024-06-07 21:06:01.208486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119043 ] 00:09:38.753 [2024-06-07 21:06:01.386596] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
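The notice just above is the point of this test: the second target was started with --disable-cpumask-locks, so it reports 'CPU core locks deactivated' and comes up on the same -m 0x1 mask the first instance already holds. Reduced to its essentials (binary and socket paths from the log; the sleep is a crude stand-in for waitforlisten):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 & pid_a=$!
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid_b=$!
    sleep 1                                              # let both targets come up
    lslocks -p "$pid_a" | grep -q spdk_cpu_lock && echo "first target holds the core-0 lock"
    lslocks -p "$pid_b" | grep -q spdk_cpu_lock || echo "second target skipped core locking"
    kill "$pid_a" "$pid_b"; wait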
00:09:38.753 [2024-06-07 21:06:01.386683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.011 [2024-06-07 21:06:01.566962] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:39.011 [2024-06-07 21:06:01.567249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.578 21:06:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:39.578 21:06:02 -- common/autotest_common.sh@852 -- # return 0 00:09:39.578 21:06:02 -- event/cpu_locks.sh@87 -- # locks_exist 119022 00:09:39.578 21:06:02 -- event/cpu_locks.sh@22 -- # lslocks -p 119022 00:09:39.578 21:06:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:39.836 21:06:02 -- event/cpu_locks.sh@89 -- # killprocess 119022 00:09:39.836 21:06:02 -- common/autotest_common.sh@926 -- # '[' -z 119022 ']' 00:09:39.836 21:06:02 -- common/autotest_common.sh@930 -- # kill -0 119022 00:09:39.836 21:06:02 -- common/autotest_common.sh@931 -- # uname 00:09:39.836 21:06:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:39.836 21:06:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119022 00:09:40.094 21:06:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:40.094 killing process with pid 119022 00:09:40.094 21:06:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:40.094 21:06:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119022' 00:09:40.094 21:06:02 -- common/autotest_common.sh@945 -- # kill 119022 00:09:40.094 21:06:02 -- common/autotest_common.sh@950 -- # wait 119022 00:09:41.027 21:06:03 -- event/cpu_locks.sh@90 -- # killprocess 119043 00:09:41.027 21:06:03 -- common/autotest_common.sh@926 -- # '[' -z 119043 ']' 00:09:41.027 21:06:03 -- common/autotest_common.sh@930 -- # kill -0 119043 00:09:41.027 21:06:03 -- common/autotest_common.sh@931 -- # uname 00:09:41.027 21:06:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:41.027 21:06:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119043 00:09:41.027 21:06:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:41.027 killing process with pid 119043 00:09:41.027 21:06:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:41.027 21:06:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119043' 00:09:41.027 21:06:03 -- common/autotest_common.sh@945 -- # kill 119043 00:09:41.027 21:06:03 -- common/autotest_common.sh@950 -- # wait 119043 00:09:41.286 00:09:41.286 real 0m3.711s 00:09:41.286 user 0m4.041s 00:09:41.286 sys 0m1.128s 00:09:41.286 ************************************ 00:09:41.286 END TEST non_locking_app_on_locked_coremask 00:09:41.286 ************************************ 00:09:41.286 21:06:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:41.286 21:06:03 -- common/autotest_common.sh@10 -- # set +x 00:09:41.286 21:06:03 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:41.286 21:06:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:41.286 21:06:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:41.286 21:06:03 -- common/autotest_common.sh@10 -- # set +x 00:09:41.286 ************************************ 00:09:41.286 START TEST locking_app_on_unlocked_coremask 00:09:41.286 ************************************ 00:09:41.286 21:06:03 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:09:41.286 
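Both targets are then reaped through the killprocess helper traced above: confirm the pid is alive, peek at its command name (an SPDK reactor shows up as reactor_0), kill it, and wait so the exit status is collected. A condensed paraphrase, not the exact autotest_common.sh source:

    killprocess_sketch() {
        local pid=$1 name
        kill -0 "$pid" || return 1                  # still running?
        name=$(ps --no-headers -o comm= "$pid")     # e.g. "reactor_0" for an SPDK target
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid"                                 # works because the target is our child
    }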
21:06:03 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=119131 00:09:41.286 21:06:03 -- event/cpu_locks.sh@99 -- # waitforlisten 119131 /var/tmp/spdk.sock 00:09:41.286 21:06:03 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:41.286 21:06:03 -- common/autotest_common.sh@819 -- # '[' -z 119131 ']' 00:09:41.286 21:06:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.286 21:06:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:41.286 21:06:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.286 21:06:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:41.286 21:06:03 -- common/autotest_common.sh@10 -- # set +x 00:09:41.286 [2024-06-07 21:06:03.905476] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:41.286 [2024-06-07 21:06:03.905666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119131 ] 00:09:41.545 [2024-06-07 21:06:04.054342] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:41.545 [2024-06-07 21:06:04.054446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.545 [2024-06-07 21:06:04.137809] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:41.545 [2024-06-07 21:06:04.138154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.521 21:06:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:42.521 21:06:04 -- common/autotest_common.sh@852 -- # return 0 00:09:42.521 21:06:04 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=119152 00:09:42.521 21:06:04 -- event/cpu_locks.sh@103 -- # waitforlisten 119152 /var/tmp/spdk2.sock 00:09:42.521 21:06:04 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:42.521 21:06:04 -- common/autotest_common.sh@819 -- # '[' -z 119152 ']' 00:09:42.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:42.521 21:06:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:42.521 21:06:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:42.521 21:06:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:42.521 21:06:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:42.521 21:06:04 -- common/autotest_common.sh@10 -- # set +x 00:09:42.521 [2024-06-07 21:06:04.904278] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:09:42.521 [2024-06-07 21:06:04.904564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119152 ] 00:09:42.521 [2024-06-07 21:06:05.066230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.794 [2024-06-07 21:06:05.243932] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:42.794 [2024-06-07 21:06:05.244205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.360 21:06:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:43.360 21:06:05 -- common/autotest_common.sh@852 -- # return 0 00:09:43.360 21:06:05 -- event/cpu_locks.sh@105 -- # locks_exist 119152 00:09:43.360 21:06:05 -- event/cpu_locks.sh@22 -- # lslocks -p 119152 00:09:43.360 21:06:05 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:43.618 21:06:06 -- event/cpu_locks.sh@107 -- # killprocess 119131 00:09:43.618 21:06:06 -- common/autotest_common.sh@926 -- # '[' -z 119131 ']' 00:09:43.618 21:06:06 -- common/autotest_common.sh@930 -- # kill -0 119131 00:09:43.618 21:06:06 -- common/autotest_common.sh@931 -- # uname 00:09:43.618 21:06:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:43.618 21:06:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119131 00:09:43.618 21:06:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:43.618 killing process with pid 119131 00:09:43.618 21:06:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:43.618 21:06:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119131' 00:09:43.618 21:06:06 -- common/autotest_common.sh@945 -- # kill 119131 00:09:43.618 21:06:06 -- common/autotest_common.sh@950 -- # wait 119131 00:09:44.596 21:06:07 -- event/cpu_locks.sh@108 -- # killprocess 119152 00:09:44.596 21:06:07 -- common/autotest_common.sh@926 -- # '[' -z 119152 ']' 00:09:44.596 21:06:07 -- common/autotest_common.sh@930 -- # kill -0 119152 00:09:44.596 21:06:07 -- common/autotest_common.sh@931 -- # uname 00:09:44.596 21:06:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:44.596 21:06:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119152 00:09:44.596 21:06:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:44.596 21:06:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:44.596 21:06:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119152' 00:09:44.596 killing process with pid 119152 00:09:44.596 21:06:07 -- common/autotest_common.sh@945 -- # kill 119152 00:09:44.596 21:06:07 -- common/autotest_common.sh@950 -- # wait 119152 00:09:45.189 00:09:45.189 real 0m3.697s 00:09:45.189 user 0m4.009s 00:09:45.189 sys 0m1.069s 00:09:45.189 ************************************ 00:09:45.189 END TEST locking_app_on_unlocked_coremask 00:09:45.189 ************************************ 00:09:45.189 21:06:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.189 21:06:07 -- common/autotest_common.sh@10 -- # set +x 00:09:45.189 21:06:07 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:45.189 21:06:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:45.189 21:06:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:45.189 21:06:07 -- 
common/autotest_common.sh@10 -- # set +x 00:09:45.189 ************************************ 00:09:45.189 START TEST locking_app_on_locked_coremask 00:09:45.189 ************************************ 00:09:45.189 21:06:07 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:09:45.189 21:06:07 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=119221 00:09:45.189 21:06:07 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:45.189 21:06:07 -- event/cpu_locks.sh@116 -- # waitforlisten 119221 /var/tmp/spdk.sock 00:09:45.189 21:06:07 -- common/autotest_common.sh@819 -- # '[' -z 119221 ']' 00:09:45.189 21:06:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.189 21:06:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:45.189 21:06:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.189 21:06:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:45.189 21:06:07 -- common/autotest_common.sh@10 -- # set +x 00:09:45.189 [2024-06-07 21:06:07.647631] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:45.189 [2024-06-07 21:06:07.647877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119221 ] 00:09:45.189 [2024-06-07 21:06:07.809374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.447 [2024-06-07 21:06:07.900358] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:45.447 [2024-06-07 21:06:07.900655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.083 21:06:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:46.084 21:06:08 -- common/autotest_common.sh@852 -- # return 0 00:09:46.084 21:06:08 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=119242 00:09:46.084 21:06:08 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:46.084 21:06:08 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 119242 /var/tmp/spdk2.sock 00:09:46.084 21:06:08 -- common/autotest_common.sh@640 -- # local es=0 00:09:46.084 21:06:08 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 119242 /var/tmp/spdk2.sock 00:09:46.084 21:06:08 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:46.084 21:06:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:46.084 21:06:08 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:46.084 21:06:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:46.084 21:06:08 -- common/autotest_common.sh@643 -- # waitforlisten 119242 /var/tmp/spdk2.sock 00:09:46.084 21:06:08 -- common/autotest_common.sh@819 -- # '[' -z 119242 ']' 00:09:46.084 21:06:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:46.084 21:06:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:46.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
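The second target (pid 119242) is launched on the same core mask as 119221, so its waitforlisten is wrapped in NOT and is expected to fail. Roughly what the NOT/valid_exec_arg machinery traced here, and resolved below via es=1 and (( !es == 0 )), boils down to; a simplified sketch, not the full autotest_common.sh implementation:

    # simplified: the real helper also validates the wrapped command first
    NOT() {
        local es=0
        "$@" || es=$?                        # run the wrapped command, keep its status
        (( es > 128 )) && es=$(( es - 128 )) # fold signal-death statuses (cf. es=234 -> es=106 in the accel tests later)
        (( !es == 0 ))                       # succeed only if the command failed
    }

run_test then counts the inverted status as a pass.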
00:09:46.084 21:06:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:46.084 21:06:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:46.084 21:06:08 -- common/autotest_common.sh@10 -- # set +x 00:09:46.084 [2024-06-07 21:06:08.624608] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:46.084 [2024-06-07 21:06:08.624877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119242 ] 00:09:46.371 [2024-06-07 21:06:08.778994] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 119221 has claimed it. 00:09:46.371 [2024-06-07 21:06:08.779109] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:46.975 ERROR: process (pid: 119242) is no longer running 00:09:46.975 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (119242) - No such process 00:09:46.975 21:06:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:46.975 21:06:09 -- common/autotest_common.sh@852 -- # return 1 00:09:46.975 21:06:09 -- common/autotest_common.sh@643 -- # es=1 00:09:46.975 21:06:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:46.975 21:06:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:46.975 21:06:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:46.975 21:06:09 -- event/cpu_locks.sh@122 -- # locks_exist 119221 00:09:46.975 21:06:09 -- event/cpu_locks.sh@22 -- # lslocks -p 119221 00:09:46.975 21:06:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:46.975 21:06:09 -- event/cpu_locks.sh@124 -- # killprocess 119221 00:09:46.975 21:06:09 -- common/autotest_common.sh@926 -- # '[' -z 119221 ']' 00:09:46.975 21:06:09 -- common/autotest_common.sh@930 -- # kill -0 119221 00:09:46.975 21:06:09 -- common/autotest_common.sh@931 -- # uname 00:09:46.975 21:06:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:46.975 21:06:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119221 00:09:46.975 21:06:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:46.975 killing process with pid 119221 00:09:46.975 21:06:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:46.975 21:06:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119221' 00:09:46.975 21:06:09 -- common/autotest_common.sh@945 -- # kill 119221 00:09:46.975 21:06:09 -- common/autotest_common.sh@950 -- # wait 119221 00:09:47.561 00:09:47.561 real 0m2.419s 00:09:47.561 user 0m2.772s 00:09:47.561 sys 0m0.602s 00:09:47.561 21:06:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.561 ************************************ 00:09:47.561 END TEST locking_app_on_locked_coremask 00:09:47.561 ************************************ 00:09:47.561 21:06:10 -- common/autotest_common.sh@10 -- # set +x 00:09:47.561 21:06:10 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:47.561 21:06:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:47.561 21:06:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:47.561 21:06:10 -- common/autotest_common.sh@10 -- # set +x 00:09:47.561 ************************************ 00:09:47.561 START TEST 
locking_overlapped_coremask 00:09:47.561 ************************************ 00:09:47.561 21:06:10 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:09:47.561 21:06:10 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=119293 00:09:47.561 21:06:10 -- event/cpu_locks.sh@133 -- # waitforlisten 119293 /var/tmp/spdk.sock 00:09:47.561 21:06:10 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:47.561 21:06:10 -- common/autotest_common.sh@819 -- # '[' -z 119293 ']' 00:09:47.561 21:06:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.561 21:06:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:47.561 21:06:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.561 21:06:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:47.561 21:06:10 -- common/autotest_common.sh@10 -- # set +x 00:09:47.561 [2024-06-07 21:06:10.128087] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:47.561 [2024-06-07 21:06:10.128324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119293 ] 00:09:47.820 [2024-06-07 21:06:10.303669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:47.820 [2024-06-07 21:06:10.387468] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:47.820 [2024-06-07 21:06:10.387854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.820 [2024-06-07 21:06:10.388002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.820 [2024-06-07 21:06:10.387996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:48.386 21:06:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:48.386 21:06:11 -- common/autotest_common.sh@852 -- # return 0 00:09:48.386 21:06:11 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=119310 00:09:48.386 21:06:11 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 119310 /var/tmp/spdk2.sock 00:09:48.386 21:06:11 -- common/autotest_common.sh@640 -- # local es=0 00:09:48.386 21:06:11 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 119310 /var/tmp/spdk2.sock 00:09:48.386 21:06:11 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:48.387 21:06:11 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:48.387 21:06:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:48.387 21:06:11 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:48.387 21:06:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:48.387 21:06:11 -- common/autotest_common.sh@643 -- # waitforlisten 119310 /var/tmp/spdk2.sock 00:09:48.387 21:06:11 -- common/autotest_common.sh@819 -- # '[' -z 119310 ']' 00:09:48.387 21:06:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:48.387 21:06:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:48.387 21:06:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
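The collision being arranged here is visible in the two core masks: the first target took -m 0x7, the second is started with -m 0x1c, and the masks overlap on exactly one core. The bit arithmetic:

    # 0x7  = 0b00111 -> cores 0,1,2  (first target, pid 119293)
    # 0x1c = 0b11100 -> cores 2,3,4  (second target, pid 119310)
    echo $(( 0x7 & 0x1c ))   # prints 4 = 1<<2, i.e. only core 2 is contested

which matches the "Cannot create lock on core 2" error that follows.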
00:09:48.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:48.387 21:06:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:48.387 21:06:11 -- common/autotest_common.sh@10 -- # set +x 00:09:48.645 [2024-06-07 21:06:11.112223] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:48.645 [2024-06-07 21:06:11.113140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119310 ] 00:09:48.645 [2024-06-07 21:06:11.319032] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 119293 has claimed it. 00:09:48.645 [2024-06-07 21:06:11.319168] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:49.213 ERROR: process (pid: 119310) is no longer running 00:09:49.213 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (119310) - No such process 00:09:49.213 21:06:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:49.213 21:06:11 -- common/autotest_common.sh@852 -- # return 1 00:09:49.213 21:06:11 -- common/autotest_common.sh@643 -- # es=1 00:09:49.214 21:06:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:49.214 21:06:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:49.214 21:06:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:49.214 21:06:11 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:49.214 21:06:11 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:49.214 21:06:11 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:49.214 21:06:11 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:49.214 21:06:11 -- event/cpu_locks.sh@141 -- # killprocess 119293 00:09:49.214 21:06:11 -- common/autotest_common.sh@926 -- # '[' -z 119293 ']' 00:09:49.214 21:06:11 -- common/autotest_common.sh@930 -- # kill -0 119293 00:09:49.214 21:06:11 -- common/autotest_common.sh@931 -- # uname 00:09:49.214 21:06:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:49.214 21:06:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119293 00:09:49.214 21:06:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:49.214 killing process with pid 119293 00:09:49.214 21:06:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:49.214 21:06:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119293' 00:09:49.214 21:06:11 -- common/autotest_common.sh@945 -- # kill 119293 00:09:49.214 21:06:11 -- common/autotest_common.sh@950 -- # wait 119293 00:09:49.781 00:09:49.781 real 0m2.204s 00:09:49.781 user 0m5.966s 00:09:49.781 sys 0m0.523s 00:09:49.781 21:06:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.781 ************************************ 00:09:49.781 END TEST locking_overlapped_coremask 00:09:49.781 ************************************ 00:09:49.781 21:06:12 -- common/autotest_common.sh@10 -- # set +x 00:09:49.781 21:06:12 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 
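check_remaining_locks, traced just above, confirms that after the failed second target the only lock files left are the ones for mask 0x7. It is a direct glob-versus-brace-expansion comparison; the same check in isolation:

    locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files that actually exist
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2, i.e. mask 0x7
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "only the expected core locks remain"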
00:09:49.781 21:06:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:49.781 21:06:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:49.781 21:06:12 -- common/autotest_common.sh@10 -- # set +x 00:09:49.781 ************************************ 00:09:49.781 START TEST locking_overlapped_coremask_via_rpc 00:09:49.781 ************************************ 00:09:49.781 21:06:12 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:09:49.781 21:06:12 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=119378 00:09:49.781 21:06:12 -- event/cpu_locks.sh@149 -- # waitforlisten 119378 /var/tmp/spdk.sock 00:09:49.781 21:06:12 -- common/autotest_common.sh@819 -- # '[' -z 119378 ']' 00:09:49.781 21:06:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.781 21:06:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:49.781 21:06:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.781 21:06:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:49.781 21:06:12 -- common/autotest_common.sh@10 -- # set +x 00:09:49.781 21:06:12 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:49.781 [2024-06-07 21:06:12.381409] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:49.781 [2024-06-07 21:06:12.382004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119378 ] 00:09:50.040 [2024-06-07 21:06:12.559728] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:50.040 [2024-06-07 21:06:12.559820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:50.040 [2024-06-07 21:06:12.638981] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:50.040 [2024-06-07 21:06:12.639405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.040 [2024-06-07 21:06:12.639521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.040 [2024-06-07 21:06:12.639516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.646 21:06:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:50.646 21:06:13 -- common/autotest_common.sh@852 -- # return 0 00:09:50.646 21:06:13 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=119397 00:09:50.646 21:06:13 -- event/cpu_locks.sh@153 -- # waitforlisten 119397 /var/tmp/spdk2.sock 00:09:50.646 21:06:13 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:50.646 21:06:13 -- common/autotest_common.sh@819 -- # '[' -z 119397 ']' 00:09:50.646 21:06:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:50.646 21:06:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:50.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:50.646 21:06:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
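This variant starts both targets with --disable-cpumask-locks (hence the "CPU core locks deactivated." notices), so the overlapping masks are tolerated at startup and the conflict is provoked later over RPC instead. The two launch lines, as traced:

    spdk_tgt -m 0x7 --disable-cpumask-locks                          # pid 119378, /var/tmp/spdk.sock
    spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks  # pid 119397 starts despite the overlap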
00:09:50.646 21:06:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:50.646 21:06:13 -- common/autotest_common.sh@10 -- # set +x 00:09:50.904 [2024-06-07 21:06:13.323061] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:50.904 [2024-06-07 21:06:13.323520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119397 ] 00:09:50.904 [2024-06-07 21:06:13.525247] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:50.904 [2024-06-07 21:06:13.525329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:51.162 [2024-06-07 21:06:13.648444] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:51.162 [2024-06-07 21:06:13.649048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.162 [2024-06-07 21:06:13.661132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.162 [2024-06-07 21:06:13.661136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:51.729 21:06:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:51.729 21:06:14 -- common/autotest_common.sh@852 -- # return 0 00:09:51.729 21:06:14 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:51.729 21:06:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:51.729 21:06:14 -- common/autotest_common.sh@10 -- # set +x 00:09:51.729 21:06:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:51.729 21:06:14 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:51.729 21:06:14 -- common/autotest_common.sh@640 -- # local es=0 00:09:51.729 21:06:14 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:51.729 21:06:14 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:09:51.729 21:06:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:51.729 21:06:14 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:09:51.729 21:06:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:51.729 21:06:14 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:51.730 21:06:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:51.730 21:06:14 -- common/autotest_common.sh@10 -- # set +x 00:09:51.730 [2024-06-07 21:06:14.289170] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 119378 has claimed it. 
00:09:51.730 request: 00:09:51.730 { 00:09:51.730 "method": "framework_enable_cpumask_locks", 00:09:51.730 "req_id": 1 00:09:51.730 } 00:09:51.730 Got JSON-RPC error response 00:09:51.730 response: 00:09:51.730 { 00:09:51.730 "code": -32603, 00:09:51.730 "message": "Failed to claim CPU core: 2" 00:09:51.730 } 00:09:51.730 21:06:14 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:09:51.730 21:06:14 -- common/autotest_common.sh@643 -- # es=1 00:09:51.730 21:06:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:51.730 21:06:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:51.730 21:06:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:51.730 21:06:14 -- event/cpu_locks.sh@158 -- # waitforlisten 119378 /var/tmp/spdk.sock 00:09:51.730 21:06:14 -- common/autotest_common.sh@819 -- # '[' -z 119378 ']' 00:09:51.730 21:06:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.730 21:06:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:51.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.730 21:06:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.730 21:06:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:51.730 21:06:14 -- common/autotest_common.sh@10 -- # set +x 00:09:51.988 21:06:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:51.988 21:06:14 -- common/autotest_common.sh@852 -- # return 0 00:09:51.988 21:06:14 -- event/cpu_locks.sh@159 -- # waitforlisten 119397 /var/tmp/spdk2.sock 00:09:51.988 21:06:14 -- common/autotest_common.sh@819 -- # '[' -z 119397 ']' 00:09:51.988 21:06:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:51.988 21:06:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:51.988 21:06:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:51.988 21:06:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:51.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
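The JSON exchange above is the deferred conflict: framework_enable_cpumask_locks succeeds on the first target, but on the second socket it returns -32603 because core 2 is already claimed by pid 119378. Driven by hand it would look roughly like this (a hypothetical direct call via scripts/rpc.py; the test goes through its rpc_cmd wrapper):

    scripts/rpc.py framework_enable_cpumask_locks    # first target, default socket: locks taken, call succeeds
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # second target: fails with {"code": -32603, "message": "Failed to claim CPU core: 2"}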
00:09:51.988 21:06:14 -- common/autotest_common.sh@10 -- # set +x 00:09:52.248 21:06:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:52.248 21:06:14 -- common/autotest_common.sh@852 -- # return 0 00:09:52.248 21:06:14 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:52.248 21:06:14 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:52.248 21:06:14 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:52.248 21:06:14 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:52.248 00:09:52.248 real 0m2.435s 00:09:52.248 user 0m1.243s 00:09:52.248 sys 0m0.146s 00:09:52.248 21:06:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.248 21:06:14 -- common/autotest_common.sh@10 -- # set +x 00:09:52.248 ************************************ 00:09:52.248 END TEST locking_overlapped_coremask_via_rpc 00:09:52.248 ************************************ 00:09:52.248 21:06:14 -- event/cpu_locks.sh@174 -- # cleanup 00:09:52.248 21:06:14 -- event/cpu_locks.sh@15 -- # [[ -z 119378 ]] 00:09:52.248 21:06:14 -- event/cpu_locks.sh@15 -- # killprocess 119378 00:09:52.248 21:06:14 -- common/autotest_common.sh@926 -- # '[' -z 119378 ']' 00:09:52.248 21:06:14 -- common/autotest_common.sh@930 -- # kill -0 119378 00:09:52.248 21:06:14 -- common/autotest_common.sh@931 -- # uname 00:09:52.248 21:06:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:52.248 21:06:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119378 00:09:52.248 21:06:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:52.248 killing process with pid 119378 00:09:52.248 21:06:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:52.248 21:06:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119378' 00:09:52.248 21:06:14 -- common/autotest_common.sh@945 -- # kill 119378 00:09:52.248 21:06:14 -- common/autotest_common.sh@950 -- # wait 119378 00:09:52.816 21:06:15 -- event/cpu_locks.sh@16 -- # [[ -z 119397 ]] 00:09:52.816 21:06:15 -- event/cpu_locks.sh@16 -- # killprocess 119397 00:09:52.816 21:06:15 -- common/autotest_common.sh@926 -- # '[' -z 119397 ']' 00:09:52.816 21:06:15 -- common/autotest_common.sh@930 -- # kill -0 119397 00:09:52.816 21:06:15 -- common/autotest_common.sh@931 -- # uname 00:09:52.816 21:06:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:52.816 21:06:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119397 00:09:52.816 21:06:15 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:09:52.816 killing process with pid 119397 00:09:52.816 21:06:15 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:09:52.816 21:06:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119397' 00:09:52.816 21:06:15 -- common/autotest_common.sh@945 -- # kill 119397 00:09:52.816 21:06:15 -- common/autotest_common.sh@950 -- # wait 119397 00:09:53.075 21:06:15 -- event/cpu_locks.sh@18 -- # rm -f 00:09:53.075 21:06:15 -- event/cpu_locks.sh@1 -- # cleanup 00:09:53.075 Process with pid 119378 is not found 00:09:53.075 21:06:15 -- event/cpu_locks.sh@15 -- # [[ -z 119378 ]] 00:09:53.075 21:06:15 -- event/cpu_locks.sh@15 -- # killprocess 119378 00:09:53.075 21:06:15 -- 
common/autotest_common.sh@926 -- # '[' -z 119378 ']' 00:09:53.075 21:06:15 -- common/autotest_common.sh@930 -- # kill -0 119378 00:09:53.075 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (119378) - No such process 00:09:53.075 21:06:15 -- common/autotest_common.sh@953 -- # echo 'Process with pid 119378 is not found' 00:09:53.075 Process with pid 119397 is not found 00:09:53.075 21:06:15 -- event/cpu_locks.sh@16 -- # [[ -z 119397 ]] 00:09:53.075 21:06:15 -- event/cpu_locks.sh@16 -- # killprocess 119397 00:09:53.075 21:06:15 -- common/autotest_common.sh@926 -- # '[' -z 119397 ']' 00:09:53.075 21:06:15 -- common/autotest_common.sh@930 -- # kill -0 119397 00:09:53.075 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (119397) - No such process 00:09:53.075 21:06:15 -- common/autotest_common.sh@953 -- # echo 'Process with pid 119397 is not found' 00:09:53.075 21:06:15 -- event/cpu_locks.sh@18 -- # rm -f 00:09:53.075 00:09:53.075 real 0m19.229s 00:09:53.075 user 0m33.521s 00:09:53.075 sys 0m5.398s 00:09:53.075 21:06:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.075 21:06:15 -- common/autotest_common.sh@10 -- # set +x 00:09:53.075 ************************************ 00:09:53.075 END TEST cpu_locks 00:09:53.075 ************************************ 00:09:53.075 00:09:53.075 real 0m47.445s 00:09:53.075 user 1m31.272s 00:09:53.075 sys 0m8.946s 00:09:53.075 21:06:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.075 21:06:15 -- common/autotest_common.sh@10 -- # set +x 00:09:53.075 ************************************ 00:09:53.075 END TEST event 00:09:53.075 ************************************ 00:09:53.334 21:06:15 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:53.334 21:06:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:53.334 21:06:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:53.334 21:06:15 -- common/autotest_common.sh@10 -- # set +x 00:09:53.334 ************************************ 00:09:53.334 START TEST thread 00:09:53.334 ************************************ 00:09:53.334 21:06:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:53.334 * Looking for test storage... 00:09:53.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:53.334 21:06:15 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:53.334 21:06:15 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:09:53.334 21:06:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:53.334 21:06:15 -- common/autotest_common.sh@10 -- # set +x 00:09:53.334 ************************************ 00:09:53.334 START TEST thread_poller_perf 00:09:53.334 ************************************ 00:09:53.334 21:06:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:53.334 [2024-06-07 21:06:15.894463] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:09:53.334 [2024-06-07 21:06:15.894688] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119532 ] 00:09:53.593 [2024-06-07 21:06:16.053294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.593 [2024-06-07 21:06:16.112553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.593 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:54.971 ====================================== 00:09:54.971 busy:2212627492 (cyc) 00:09:54.971 total_run_count: 331000 00:09:54.971 tsc_hz: 2200000000 (cyc) 00:09:54.971 ====================================== 00:09:54.971 poller_cost: 6684 (cyc), 3038 (nsec) 00:09:54.971 00:09:54.971 real 0m1.349s 00:09:54.971 user 0m1.157s 00:09:54.971 sys 0m0.089s 00:09:54.971 ************************************ 00:09:54.971 END TEST thread_poller_perf 00:09:54.971 ************************************ 00:09:54.971 21:06:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.971 21:06:17 -- common/autotest_common.sh@10 -- # set +x 00:09:54.971 21:06:17 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:54.971 21:06:17 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:09:54.971 21:06:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:54.971 21:06:17 -- common/autotest_common.sh@10 -- # set +x 00:09:54.971 ************************************ 00:09:54.971 START TEST thread_poller_perf 00:09:54.971 ************************************ 00:09:54.971 21:06:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:54.971 [2024-06-07 21:06:17.303377] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:54.971 [2024-06-07 21:06:17.303631] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119570 ] 00:09:54.971 [2024-06-07 21:06:17.471871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.971 [2024-06-07 21:06:17.543901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.971 Running 1000 pollers for 1 seconds with 0 microseconds period. 
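Mapping poller_perf's banner lines back to its flags (an inference from the two runs, not a documented synopsis): -b is the poller count, -l the poller period in microseconds, -t the run time in seconds.

    poller_perf -b 1000 -l 1 -t 1   # 1000 pollers, 1 us period, 1 s run (first test)
    poller_perf -b 1000 -l 0 -t 1   # 0 us period: untimed pollers, presumably run on every reactor pass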
00:09:56.349 ====================================== 00:09:56.349 busy:2205205482 (cyc) 00:09:56.349 total_run_count: 4391000 00:09:56.349 tsc_hz: 2200000000 (cyc) 00:09:56.349 ====================================== 00:09:56.349 poller_cost: 502 (cyc), 228 (nsec) 00:09:56.349 00:09:56.350 real 0m1.374s 00:09:56.350 user 0m1.182s 00:09:56.350 sys 0m0.090s 00:09:56.350 ************************************ 00:09:56.350 END TEST thread_poller_perf 00:09:56.350 ************************************ 00:09:56.350 21:06:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.350 21:06:18 -- common/autotest_common.sh@10 -- # set +x 00:09:56.350 21:06:18 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:09:56.350 21:06:18 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:09:56.350 21:06:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:56.350 21:06:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:56.350 21:06:18 -- common/autotest_common.sh@10 -- # set +x 00:09:56.350 ************************************ 00:09:56.350 START TEST thread_spdk_lock 00:09:56.350 ************************************ 00:09:56.350 21:06:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:09:56.350 [2024-06-07 21:06:18.725793] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:56.350 [2024-06-07 21:06:18.726008] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119613 ] 00:09:56.350 [2024-06-07 21:06:18.882885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:56.350 [2024-06-07 21:06:18.956517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.350 [2024-06-07 21:06:18.956520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.918 [2024-06-07 21:06:19.483372] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:56.918 [2024-06-07 21:06:19.483477] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:09:56.918 [2024-06-07 21:06:19.483529] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x561b2a4430c0 00:09:56.918 [2024-06-07 21:06:19.484798] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:56.918 [2024-06-07 21:06:19.484922] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:56.918 [2024-06-07 21:06:19.484958] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:56.918 Starting test contend 00:09:56.918 Worker Delay Wait us Hold us Total us 00:09:56.918 0 3 132466 197615 330081 00:09:56.918 1 5 57299 300199 357499 00:09:56.918 PASS test contend 00:09:56.918 Starting test hold_by_poller 
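Back-checking the two poller_perf summaries: poller_cost is busy cycles divided by total_run_count, and the nanosecond figure follows from the 2.2 GHz tsc_hz.

    run 1 (-l 1): 2212627492 cyc / 331000 runs  ≈ 6684 cyc;  6684 / 2.2 ≈ 3038 ns
    run 2 (-l 0): 2205205482 cyc / 4391000 runs ≈  502 cyc;   502 / 2.2 ≈  228 ns

So dropping the 1 us timer period makes each poll roughly 13x cheaper here.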
00:09:56.918 PASS test hold_by_poller 00:09:56.918 Starting test hold_by_message 00:09:56.918 PASS test hold_by_message 00:09:56.918 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:09:56.918 100014 assertions passed 00:09:56.918 0 assertions failed 00:09:56.918 00:09:56.918 real 0m0.878s 00:09:56.918 user 0m1.247s 00:09:56.918 sys 0m0.060s 00:09:56.918 ************************************ 00:09:56.918 21:06:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.918 21:06:19 -- common/autotest_common.sh@10 -- # set +x 00:09:56.918 END TEST thread_spdk_lock 00:09:56.918 ************************************ 00:09:57.176 00:09:57.176 real 0m3.828s 00:09:57.176 user 0m3.709s 00:09:57.176 sys 0m0.338s 00:09:57.176 21:06:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.176 21:06:19 -- common/autotest_common.sh@10 -- # set +x 00:09:57.176 ************************************ 00:09:57.176 END TEST thread 00:09:57.176 ************************************ 00:09:57.176 21:06:19 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:57.176 21:06:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:57.176 21:06:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:57.176 21:06:19 -- common/autotest_common.sh@10 -- # set +x 00:09:57.176 ************************************ 00:09:57.177 START TEST accel 00:09:57.177 ************************************ 00:09:57.177 21:06:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:57.177 * Looking for test storage... 00:09:57.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:57.177 21:06:19 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:09:57.177 21:06:19 -- accel/accel.sh@74 -- # get_expected_opcs 00:09:57.177 21:06:19 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:57.177 21:06:19 -- accel/accel.sh@59 -- # spdk_tgt_pid=119691 00:09:57.177 21:06:19 -- accel/accel.sh@60 -- # waitforlisten 119691 00:09:57.177 21:06:19 -- common/autotest_common.sh@819 -- # '[' -z 119691 ']' 00:09:57.177 21:06:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.177 21:06:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:57.177 21:06:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.177 21:06:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:57.177 21:06:19 -- common/autotest_common.sh@10 -- # set +x 00:09:57.177 21:06:19 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:09:57.177 21:06:19 -- accel/accel.sh@58 -- # build_accel_config 00:09:57.177 21:06:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:57.177 21:06:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:57.177 21:06:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:57.177 21:06:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:57.177 21:06:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:57.177 21:06:19 -- accel/accel.sh@41 -- # local IFS=, 00:09:57.177 21:06:19 -- accel/accel.sh@42 -- # jq -r . 00:09:57.177 [2024-06-07 21:06:19.815694] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:09:57.177 [2024-06-07 21:06:19.815926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119691 ] 00:09:57.438 [2024-06-07 21:06:19.983762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.438 [2024-06-07 21:06:20.062337] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:57.438 [2024-06-07 21:06:20.062626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.378 21:06:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:58.378 21:06:20 -- common/autotest_common.sh@852 -- # return 0 00:09:58.378 21:06:20 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:58.378 21:06:20 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:09:58.378 21:06:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:58.378 21:06:20 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:09:58.378 21:06:20 -- common/autotest_common.sh@10 -- # set +x 00:09:58.378 21:06:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:58.378 21:06:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # IFS== 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # read -r opc module 00:09:58.378 21:06:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:58.378 21:06:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # IFS== 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # read -r opc module 00:09:58.378 21:06:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:58.378 21:06:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # IFS== 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # read -r opc module 00:09:58.378 21:06:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:58.378 21:06:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # IFS== 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # read -r opc module 00:09:58.378 21:06:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:58.378 21:06:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # IFS== 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # read -r opc module 00:09:58.378 21:06:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:58.378 21:06:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # IFS== 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # read -r opc module 00:09:58.378 21:06:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:58.378 21:06:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # IFS== 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # read -r opc module 00:09:58.378 21:06:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:58.378 21:06:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # IFS== 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # read -r opc module 00:09:58.378 21:06:20 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:09:58.378 21:06:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # IFS== 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # read -r opc module 00:09:58.378 21:06:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:58.378 21:06:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # IFS== 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # read -r opc module 00:09:58.378 21:06:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:58.378 21:06:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # IFS== 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # read -r opc module 00:09:58.378 21:06:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:58.378 21:06:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # IFS== 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # read -r opc module 00:09:58.378 21:06:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:58.378 21:06:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # IFS== 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # read -r opc module 00:09:58.378 21:06:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:58.378 21:06:20 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # IFS== 00:09:58.378 21:06:20 -- accel/accel.sh@64 -- # read -r opc module 00:09:58.378 21:06:20 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:58.379 21:06:20 -- accel/accel.sh@67 -- # killprocess 119691 00:09:58.379 21:06:20 -- common/autotest_common.sh@926 -- # '[' -z 119691 ']' 00:09:58.379 21:06:20 -- common/autotest_common.sh@930 -- # kill -0 119691 00:09:58.379 21:06:20 -- common/autotest_common.sh@931 -- # uname 00:09:58.379 21:06:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:58.379 21:06:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119691 00:09:58.379 21:06:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:58.379 killing process with pid 119691 00:09:58.379 21:06:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:58.379 21:06:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119691' 00:09:58.379 21:06:20 -- common/autotest_common.sh@945 -- # kill 119691 00:09:58.379 21:06:20 -- common/autotest_common.sh@950 -- # wait 119691 00:09:58.946 21:06:21 -- accel/accel.sh@68 -- # trap - ERR 00:09:58.946 21:06:21 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:09:58.946 21:06:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:58.946 21:06:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:58.946 21:06:21 -- common/autotest_common.sh@10 -- # set +x 00:09:58.946 21:06:21 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:09:58.946 21:06:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:58.946 21:06:21 -- accel/accel.sh@12 -- # build_accel_config 00:09:58.946 21:06:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:58.946 21:06:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:58.946 21:06:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:58.946 21:06:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:58.946 21:06:21 -- accel/accel.sh@37 -- # [[ -n 
'' ]] 00:09:58.946 21:06:21 -- accel/accel.sh@41 -- # local IFS=, 00:09:58.946 21:06:21 -- accel/accel.sh@42 -- # jq -r . 00:09:58.946 21:06:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.946 21:06:21 -- common/autotest_common.sh@10 -- # set +x 00:09:58.946 21:06:21 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:58.946 21:06:21 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:58.946 21:06:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:58.946 21:06:21 -- common/autotest_common.sh@10 -- # set +x 00:09:58.946 ************************************ 00:09:58.946 START TEST accel_missing_filename 00:09:58.946 ************************************ 00:09:58.946 21:06:21 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:09:58.946 21:06:21 -- common/autotest_common.sh@640 -- # local es=0 00:09:58.946 21:06:21 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:58.946 21:06:21 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:58.946 21:06:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:58.946 21:06:21 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:58.946 21:06:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:58.946 21:06:21 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:09:58.946 21:06:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:58.946 21:06:21 -- accel/accel.sh@12 -- # build_accel_config 00:09:58.946 21:06:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:58.946 21:06:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:58.946 21:06:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:58.946 21:06:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:58.946 21:06:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:58.946 21:06:21 -- accel/accel.sh@41 -- # local IFS=, 00:09:58.946 21:06:21 -- accel/accel.sh@42 -- # jq -r . 00:09:58.946 [2024-06-07 21:06:21.480847] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:58.946 [2024-06-07 21:06:21.481743] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119763 ] 00:09:59.205 [2024-06-07 21:06:21.648617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.205 [2024-06-07 21:06:21.725541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.205 [2024-06-07 21:06:21.781655] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:59.205 [2024-06-07 21:06:21.865129] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:59.463 A filename is required. 
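"A filename is required." is the expected outcome: for the compress workload, accel_perf reads its input from the file named by -l (per the option help printed further below), so the bare invocation has nothing to compress and must abort before starting. The wrapped negative test and its presumable fix:

    NOT accel_perf -t 1 -w compress                                             # no -l: aborts as above
    accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib  # would presumably run, given an input file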
00:09:59.463 21:06:21 -- common/autotest_common.sh@643 -- # es=234 00:09:59.464 21:06:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:59.464 21:06:21 -- common/autotest_common.sh@652 -- # es=106 00:09:59.464 21:06:21 -- common/autotest_common.sh@653 -- # case "$es" in 00:09:59.464 21:06:21 -- common/autotest_common.sh@660 -- # es=1 00:09:59.464 21:06:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:59.464 00:09:59.464 real 0m0.515s 00:09:59.464 user 0m0.307s 00:09:59.464 sys 0m0.162s 00:09:59.464 21:06:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.464 21:06:21 -- common/autotest_common.sh@10 -- # set +x 00:09:59.464 ************************************ 00:09:59.464 END TEST accel_missing_filename 00:09:59.464 ************************************ 00:09:59.464 21:06:22 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:59.464 21:06:22 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:09:59.464 21:06:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:59.464 21:06:22 -- common/autotest_common.sh@10 -- # set +x 00:09:59.464 ************************************ 00:09:59.464 START TEST accel_compress_verify 00:09:59.464 ************************************ 00:09:59.464 21:06:22 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:59.464 21:06:22 -- common/autotest_common.sh@640 -- # local es=0 00:09:59.464 21:06:22 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:59.464 21:06:22 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:59.464 21:06:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:59.464 21:06:22 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:59.464 21:06:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:59.464 21:06:22 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:59.464 21:06:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:59.464 21:06:22 -- accel/accel.sh@12 -- # build_accel_config 00:09:59.464 21:06:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:59.464 21:06:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:59.464 21:06:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:59.464 21:06:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:59.464 21:06:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:59.464 21:06:22 -- accel/accel.sh@41 -- # local IFS=, 00:09:59.464 21:06:22 -- accel/accel.sh@42 -- # jq -r . 00:09:59.464 [2024-06-07 21:06:22.046995] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:09:59.464 [2024-06-07 21:06:22.047270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119794 ] 00:09:59.723 [2024-06-07 21:06:22.218747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.723 [2024-06-07 21:06:22.298687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.723 [2024-06-07 21:06:22.358558] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:59.982 [2024-06-07 21:06:22.443520] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:59.982 00:09:59.982 Compression does not support the verify option, aborting. 00:09:59.982 21:06:22 -- common/autotest_common.sh@643 -- # es=161 00:09:59.982 21:06:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:59.982 21:06:22 -- common/autotest_common.sh@652 -- # es=33 00:09:59.982 21:06:22 -- common/autotest_common.sh@653 -- # case "$es" in 00:09:59.982 21:06:22 -- common/autotest_common.sh@660 -- # es=1 00:09:59.982 21:06:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:59.982 00:09:59.982 real 0m0.531s 00:09:59.982 user 0m0.311s 00:09:59.982 sys 0m0.177s 00:09:59.982 21:06:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.982 21:06:22 -- common/autotest_common.sh@10 -- # set +x 00:09:59.982 ************************************ 00:09:59.982 END TEST accel_compress_verify 00:09:59.982 ************************************ 00:09:59.982 21:06:22 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:09:59.982 21:06:22 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:59.982 21:06:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:59.982 21:06:22 -- common/autotest_common.sh@10 -- # set +x 00:09:59.982 ************************************ 00:09:59.982 START TEST accel_wrong_workload 00:09:59.982 ************************************ 00:09:59.982 21:06:22 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:09:59.982 21:06:22 -- common/autotest_common.sh@640 -- # local es=0 00:09:59.982 21:06:22 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:09:59.982 21:06:22 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:59.982 21:06:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:59.982 21:06:22 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:59.982 21:06:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:59.982 21:06:22 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:09:59.982 21:06:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:09:59.982 21:06:22 -- accel/accel.sh@12 -- # build_accel_config 00:09:59.982 21:06:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:59.982 21:06:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:59.982 21:06:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:59.982 21:06:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:59.982 21:06:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:59.982 21:06:22 -- accel/accel.sh@41 -- # local IFS=, 00:09:59.982 21:06:22 -- accel/accel.sh@42 -- # jq -r . 
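accel_compress_verify above exercises the complementary rejection: even with a valid -l input file, pairing compress with -y (verify) is refused. The exit-status laundering is also visible in the trace: 161 is folded to 33 (161 - 128) and then normalized to a plain failure, which is exactly what the NOT wrapper wants.

    NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
    # -> "Compression does not support the verify option, aborting."   (es=161 -> 33 -> 1)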
00:09:59.982 Unsupported workload type: foobar 00:09:59.982 [2024-06-07 21:06:22.620500] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:09:59.982 accel_perf options: 00:09:59.982 [-h help message] 00:09:59.982 [-q queue depth per core] 00:09:59.982 [-C for supported workloads, use this value to configure the io vector size to test (default 1)] 00:09:59.982 [-T number of threads per core] 00:09:59.982 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:59.982 [-t time in seconds] 00:09:59.982 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:59.982 dif_verify, dif_generate, dif_generate_copy] 00:09:59.982 [-M assign module to the operation, not compatible with accel_assign_opc RPC] 00:09:59.982 [-l for compress/decompress workloads, name of uncompressed input file] 00:09:59.982 [-S for crc32c workload, use this seed value (default 0)] 00:09:59.982 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)] 00:09:59.982 [-f for fill workload, use this BYTE value (default 255)] 00:09:59.982 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:59.982 [-y verify result if this switch is on] 00:09:59.982 [-a tasks to allocate per core (default: same value as -q)] 00:09:59.982 Can be used to spread operations across a wider range of memory. 00:09:59.982 21:06:22 -- common/autotest_common.sh@643 -- # es=1 00:09:59.982 21:06:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:59.982 21:06:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:59.982 21:06:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:59.982 00:09:59.982 real 0m0.053s 00:09:59.982 user 0m0.028s 00:09:59.982 sys 0m0.026s 00:09:59.982 21:06:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.982 ************************************ 00:09:59.982 END TEST accel_wrong_workload 00:09:59.982 ************************************ 00:09:59.982 21:06:22 -- common/autotest_common.sh@10 -- # set +x 00:10:00.241 21:06:22 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:00.242 21:06:22 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:00.242 21:06:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:00.242 21:06:22 -- common/autotest_common.sh@10 -- # set +x 00:10:00.242 ************************************ 00:10:00.242 START TEST accel_negative_buffers 00:10:00.242 ************************************ 00:10:00.242 21:06:22 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:00.242 21:06:22 -- common/autotest_common.sh@640 -- # local es=0 00:10:00.242 21:06:22 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:00.242 21:06:22 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:00.242 21:06:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:00.242 21:06:22 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:00.242 21:06:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:00.242 21:06:22 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:10:00.242 21:06:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:00.242 21:06:22 -- accel/accel.sh@12 -- #
build_accel_config 00:10:00.242 21:06:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:00.242 21:06:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:00.242 21:06:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:00.242 21:06:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:00.242 21:06:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:00.242 21:06:22 -- accel/accel.sh@41 -- # local IFS=, 00:10:00.242 21:06:22 -- accel/accel.sh@42 -- # jq -r . 00:10:00.242 -x option must be non-negative. 00:10:00.242 [2024-06-07 21:06:22.722684] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:00.242 accel_perf options: 00:10:00.242 [-h help message] 00:10:00.242 [-q queue depth per core] 00:10:00.242 [-C for supported workloads, use this value to configure the io vector size to test (default 1)] 00:10:00.242 [-T number of threads per core] 00:10:00.242 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:00.242 [-t time in seconds] 00:10:00.242 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:00.242 dif_verify, dif_generate, dif_generate_copy] 00:10:00.242 [-M assign module to the operation, not compatible with accel_assign_opc RPC] 00:10:00.242 [-l for compress/decompress workloads, name of uncompressed input file] 00:10:00.242 [-S for crc32c workload, use this seed value (default 0)] 00:10:00.242 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)] 00:10:00.242 [-f for fill workload, use this BYTE value (default 255)] 00:10:00.242 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:00.242 [-y verify result if this switch is on] 00:10:00.242 [-a tasks to allocate per core (default: same value as -q)] 00:10:00.242 Can be used to spread operations across a wider range of memory.
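Both rejections above happen inside spdk_app_parse_args, before the app framework even starts a reactor. For contrast, invocations that satisfy the usage rules just printed would look like the following sketch; the binary path is the one used throughout this log, and the xor line is an assumption built from the "-x ... minimum: 2" rule rather than a command taken from this run.

    ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    "$ACCEL_PERF" -t 1 -w crc32c -S 32 -y   # known workload, seeded, verified
    "$ACCEL_PERF" -t 1 -w xor -y -x 2       # -x must be >= 2, never negative

The crc32c form is exactly what the accel_crc32c test below drives through run_test.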
00:10:00.242 21:06:22 -- common/autotest_common.sh@643 -- # es=1 00:10:00.242 21:06:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:00.242 21:06:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:00.242 21:06:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:00.242 00:10:00.242 real 0m0.046s 00:10:00.242 user 0m0.018s 00:10:00.242 sys 0m0.028s 00:10:00.242 21:06:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.242 21:06:22 -- common/autotest_common.sh@10 -- # set +x 00:10:00.242 ************************************ 00:10:00.242 END TEST accel_negative_buffers 00:10:00.242 ************************************ 00:10:00.242 21:06:22 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:00.242 21:06:22 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:00.242 21:06:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:00.242 21:06:22 -- common/autotest_common.sh@10 -- # set +x 00:10:00.242 ************************************ 00:10:00.242 START TEST accel_crc32c 00:10:00.242 ************************************ 00:10:00.242 21:06:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:00.242 21:06:22 -- accel/accel.sh@16 -- # local accel_opc 00:10:00.242 21:06:22 -- accel/accel.sh@17 -- # local accel_module 00:10:00.242 21:06:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:00.242 21:06:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:00.242 21:06:22 -- accel/accel.sh@12 -- # build_accel_config 00:10:00.242 21:06:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:00.242 21:06:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:00.242 21:06:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:00.242 21:06:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:00.242 21:06:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:00.242 21:06:22 -- accel/accel.sh@41 -- # local IFS=, 00:10:00.242 21:06:22 -- accel/accel.sh@42 -- # jq -r . 00:10:00.242 [2024-06-07 21:06:22.812858] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:00.242 [2024-06-07 21:06:22.813142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119887 ] 00:10:00.501 [2024-06-07 21:06:22.979343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.501 [2024-06-07 21:06:23.057653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.874 21:06:24 -- accel/accel.sh@18 -- # out=' 00:10:01.874 SPDK Configuration: 00:10:01.874 Core mask: 0x1 00:10:01.874 00:10:01.874 Accel Perf Configuration: 00:10:01.874 Workload Type: crc32c 00:10:01.874 CRC-32C seed: 32 00:10:01.874 Transfer size: 4096 bytes 00:10:01.874 Vector count 1 00:10:01.874 Module: software 00:10:01.874 Queue depth: 32 00:10:01.874 Allocate depth: 32 00:10:01.874 # threads/core: 1 00:10:01.874 Run time: 1 seconds 00:10:01.874 Verify: Yes 00:10:01.874 00:10:01.874 Running for 1 seconds... 
00:10:01.874 00:10:01.874 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:01.874 ------------------------------------------------------------------------------------ 00:10:01.874 0,0 477184/s 1864 MiB/s 0 0 00:10:01.874 ==================================================================================== 00:10:01.874 Total 477184/s 1864 MiB/s 0 0' 00:10:01.874 21:06:24 -- accel/accel.sh@20 -- # IFS=: 00:10:01.874 21:06:24 -- accel/accel.sh@20 -- # read -r var val 00:10:01.874 21:06:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:01.874 21:06:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:01.874 21:06:24 -- accel/accel.sh@12 -- # build_accel_config 00:10:01.874 21:06:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:01.874 21:06:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:01.874 21:06:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:01.874 21:06:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:01.874 21:06:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:01.874 21:06:24 -- accel/accel.sh@41 -- # local IFS=, 00:10:01.874 21:06:24 -- accel/accel.sh@42 -- # jq -r . 00:10:01.874 [2024-06-07 21:06:24.337386] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:01.874 [2024-06-07 21:06:24.337617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119914 ] 00:10:01.874 [2024-06-07 21:06:24.489429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.133 [2024-06-07 21:06:24.588908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.133 21:06:24 -- accel/accel.sh@21 -- # val= 00:10:02.133 21:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.133 21:06:24 -- accel/accel.sh@20 -- # IFS=: 00:10:02.133 21:06:24 -- accel/accel.sh@20 -- # read -r var val 00:10:02.133 21:06:24 -- accel/accel.sh@21 -- # val= 00:10:02.133 21:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.133 21:06:24 -- accel/accel.sh@20 -- # IFS=: 00:10:02.133 21:06:24 -- accel/accel.sh@20 -- # read -r var val 00:10:02.133 21:06:24 -- accel/accel.sh@21 -- # val=0x1 00:10:02.133 21:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.133 21:06:24 -- accel/accel.sh@20 -- # IFS=: 00:10:02.133 21:06:24 -- accel/accel.sh@20 -- # read -r var val 00:10:02.133 21:06:24 -- accel/accel.sh@21 -- # val= 00:10:02.133 21:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.133 21:06:24 -- accel/accel.sh@20 -- # IFS=: 00:10:02.133 21:06:24 -- accel/accel.sh@20 -- # read -r var val 00:10:02.133 21:06:24 -- accel/accel.sh@21 -- # val= 00:10:02.133 21:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.133 21:06:24 -- accel/accel.sh@20 -- # IFS=: 00:10:02.133 21:06:24 -- accel/accel.sh@20 -- # read -r var val 00:10:02.133 21:06:24 -- accel/accel.sh@21 -- # val=crc32c 00:10:02.133 21:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.133 21:06:24 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:02.133 21:06:24 -- accel/accel.sh@20 -- # IFS=: 00:10:02.133 21:06:24 -- accel/accel.sh@20 -- # read -r var val 00:10:02.133 21:06:24 -- accel/accel.sh@21 -- # val=32 00:10:02.133 21:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.133 21:06:24 -- accel/accel.sh@20 -- # IFS=: 00:10:02.133 21:06:24 -- accel/accel.sh@20 -- # read -r var val 00:10:02.133 21:06:24 
-- accel/accel.sh@21 -- # val='4096 bytes' 00:10:02.133 21:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.133 21:06:24 -- accel/accel.sh@20 -- # IFS=: 00:10:02.133 21:06:24 -- accel/accel.sh@20 -- # read -r var val 00:10:02.133 21:06:24 -- accel/accel.sh@21 -- # val= 00:10:02.134 21:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.134 21:06:24 -- accel/accel.sh@20 -- # IFS=: 00:10:02.134 21:06:24 -- accel/accel.sh@20 -- # read -r var val 00:10:02.134 21:06:24 -- accel/accel.sh@21 -- # val=software 00:10:02.134 21:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.134 21:06:24 -- accel/accel.sh@23 -- # accel_module=software 00:10:02.134 21:06:24 -- accel/accel.sh@20 -- # IFS=: 00:10:02.134 21:06:24 -- accel/accel.sh@20 -- # read -r var val 00:10:02.134 21:06:24 -- accel/accel.sh@21 -- # val=32 00:10:02.134 21:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.134 21:06:24 -- accel/accel.sh@20 -- # IFS=: 00:10:02.134 21:06:24 -- accel/accel.sh@20 -- # read -r var val 00:10:02.134 21:06:24 -- accel/accel.sh@21 -- # val=32 00:10:02.134 21:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.134 21:06:24 -- accel/accel.sh@20 -- # IFS=: 00:10:02.134 21:06:24 -- accel/accel.sh@20 -- # read -r var val 00:10:02.134 21:06:24 -- accel/accel.sh@21 -- # val=1 00:10:02.134 21:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.134 21:06:24 -- accel/accel.sh@20 -- # IFS=: 00:10:02.134 21:06:24 -- accel/accel.sh@20 -- # read -r var val 00:10:02.134 21:06:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:02.134 21:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.134 21:06:24 -- accel/accel.sh@20 -- # IFS=: 00:10:02.134 21:06:24 -- accel/accel.sh@20 -- # read -r var val 00:10:02.134 21:06:24 -- accel/accel.sh@21 -- # val=Yes 00:10:02.134 21:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.134 21:06:24 -- accel/accel.sh@20 -- # IFS=: 00:10:02.134 21:06:24 -- accel/accel.sh@20 -- # read -r var val 00:10:02.134 21:06:24 -- accel/accel.sh@21 -- # val= 00:10:02.134 21:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.134 21:06:24 -- accel/accel.sh@20 -- # IFS=: 00:10:02.134 21:06:24 -- accel/accel.sh@20 -- # read -r var val 00:10:02.134 21:06:24 -- accel/accel.sh@21 -- # val= 00:10:02.134 21:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.134 21:06:24 -- accel/accel.sh@20 -- # IFS=: 00:10:02.134 21:06:24 -- accel/accel.sh@20 -- # read -r var val 00:10:03.527 21:06:25 -- accel/accel.sh@21 -- # val= 00:10:03.527 21:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.527 21:06:25 -- accel/accel.sh@20 -- # IFS=: 00:10:03.527 21:06:25 -- accel/accel.sh@20 -- # read -r var val 00:10:03.527 21:06:25 -- accel/accel.sh@21 -- # val= 00:10:03.527 21:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.527 21:06:25 -- accel/accel.sh@20 -- # IFS=: 00:10:03.527 21:06:25 -- accel/accel.sh@20 -- # read -r var val 00:10:03.527 21:06:25 -- accel/accel.sh@21 -- # val= 00:10:03.527 21:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.527 21:06:25 -- accel/accel.sh@20 -- # IFS=: 00:10:03.527 21:06:25 -- accel/accel.sh@20 -- # read -r var val 00:10:03.527 21:06:25 -- accel/accel.sh@21 -- # val= 00:10:03.527 21:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.527 21:06:25 -- accel/accel.sh@20 -- # IFS=: 00:10:03.527 21:06:25 -- accel/accel.sh@20 -- # read -r var val 00:10:03.527 21:06:25 -- accel/accel.sh@21 -- # val= 00:10:03.527 21:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.527 21:06:25 -- accel/accel.sh@20 -- # IFS=: 00:10:03.527 21:06:25 
-- accel/accel.sh@20 -- # read -r var val 00:10:03.527 21:06:25 -- accel/accel.sh@21 -- # val= 00:10:03.527 21:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.527 21:06:25 -- accel/accel.sh@20 -- # IFS=: 00:10:03.527 21:06:25 -- accel/accel.sh@20 -- # read -r var val 00:10:03.527 21:06:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:03.527 21:06:25 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:03.527 21:06:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:03.527 00:10:03.527 real 0m3.063s 00:10:03.527 user 0m2.621s 00:10:03.527 sys 0m0.300s 00:10:03.527 21:06:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:03.527 ************************************ 00:10:03.527 END TEST accel_crc32c 00:10:03.527 ************************************ 00:10:03.527 21:06:25 -- common/autotest_common.sh@10 -- # set +x 00:10:03.527 21:06:25 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:03.527 21:06:25 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:03.527 21:06:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:03.527 21:06:25 -- common/autotest_common.sh@10 -- # set +x 00:10:03.527 ************************************ 00:10:03.527 START TEST accel_crc32c_C2 00:10:03.527 ************************************ 00:10:03.527 21:06:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:03.527 21:06:25 -- accel/accel.sh@16 -- # local accel_opc 00:10:03.527 21:06:25 -- accel/accel.sh@17 -- # local accel_module 00:10:03.527 21:06:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:03.527 21:06:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:03.527 21:06:25 -- accel/accel.sh@12 -- # build_accel_config 00:10:03.527 21:06:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:03.527 21:06:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:03.527 21:06:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:03.527 21:06:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:03.527 21:06:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:03.527 21:06:25 -- accel/accel.sh@41 -- # local IFS=, 00:10:03.527 21:06:25 -- accel/accel.sh@42 -- # jq -r . 00:10:03.527 [2024-06-07 21:06:25.925628] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:03.527 [2024-06-07 21:06:25.925846] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119956 ] 00:10:03.527 [2024-06-07 21:06:26.077901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.527 [2024-06-07 21:06:26.157830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.903 21:06:27 -- accel/accel.sh@18 -- # out=' 00:10:04.903 SPDK Configuration: 00:10:04.903 Core mask: 0x1 00:10:04.903 00:10:04.903 Accel Perf Configuration: 00:10:04.903 Workload Type: crc32c 00:10:04.903 CRC-32C seed: 0 00:10:04.903 Transfer size: 4096 bytes 00:10:04.903 Vector count 2 00:10:04.903 Module: software 00:10:04.903 Queue depth: 32 00:10:04.903 Allocate depth: 32 00:10:04.903 # threads/core: 1 00:10:04.903 Run time: 1 seconds 00:10:04.903 Verify: Yes 00:10:04.903 00:10:04.903 Running for 1 seconds... 
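The wall of val= records above is accel.sh replaying the same workload a second time and parsing accel_perf's "SPDK Configuration" echo with IFS=: to confirm which opcode and module actually ran. A sketch of that reader follows; the variable names (accel_opc, accel_module) are taken from the trace, while the match patterns and the fd-62 config plumbing ($cfg_json) are assumptions.

    # Hypothetical shape of the accel.sh verification pass; only the
    # traced names are certain, the patterns and redirection are guesses.
    while IFS=: read -r var val; do
        case "$var" in
            *'Workload Type'*) accel_opc=${val# } ;;    # strip space after ':'
            *Module*)          accel_module=${val# } ;;
        esac
    done < <(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
                 -c /dev/fd/62 -t 1 -w crc32c -y -C 2 62< "$cfg_json")
    [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]

The closing [[ ... ]] checks are visible verbatim at accel.sh@28 once each test's second pass finishes.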
00:10:04.903 00:10:04.903 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:04.903 ------------------------------------------------------------------------------------ 00:10:04.903 0,0 359136/s 2805 MiB/s 0 0 00:10:04.903 ==================================================================================== 00:10:04.903 Total 359136/s 2805 MiB/s 0 0' 00:10:04.903 21:06:27 -- accel/accel.sh@20 -- # IFS=: 00:10:04.903 21:06:27 -- accel/accel.sh@20 -- # read -r var val 00:10:04.903 21:06:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:04.903 21:06:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:04.903 21:06:27 -- accel/accel.sh@12 -- # build_accel_config 00:10:04.903 21:06:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:04.903 21:06:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:04.903 21:06:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:04.903 21:06:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:04.903 21:06:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:04.903 21:06:27 -- accel/accel.sh@41 -- # local IFS=, 00:10:04.903 21:06:27 -- accel/accel.sh@42 -- # jq -r . 00:10:04.903 [2024-06-07 21:06:27.438176] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:04.903 [2024-06-07 21:06:27.438504] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119982 ] 00:10:05.161 [2024-06-07 21:06:27.612776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.161 [2024-06-07 21:06:27.704694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.161 21:06:27 -- accel/accel.sh@21 -- # val= 00:10:05.161 21:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 21:06:27 -- accel/accel.sh@21 -- # val= 00:10:05.161 21:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 21:06:27 -- accel/accel.sh@21 -- # val=0x1 00:10:05.161 21:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 21:06:27 -- accel/accel.sh@21 -- # val= 00:10:05.161 21:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 21:06:27 -- accel/accel.sh@21 -- # val= 00:10:05.161 21:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 21:06:27 -- accel/accel.sh@21 -- # val=crc32c 00:10:05.161 21:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 21:06:27 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 21:06:27 -- accel/accel.sh@21 -- # val=0 00:10:05.161 21:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 21:06:27 --
accel/accel.sh@21 -- # val='4096 bytes' 00:10:05.161 21:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 21:06:27 -- accel/accel.sh@21 -- # val= 00:10:05.161 21:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 21:06:27 -- accel/accel.sh@21 -- # val=software 00:10:05.161 21:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 21:06:27 -- accel/accel.sh@23 -- # accel_module=software 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 21:06:27 -- accel/accel.sh@21 -- # val=32 00:10:05.161 21:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 21:06:27 -- accel/accel.sh@21 -- # val=32 00:10:05.161 21:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 21:06:27 -- accel/accel.sh@21 -- # val=1 00:10:05.161 21:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 21:06:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:05.161 21:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 21:06:27 -- accel/accel.sh@21 -- # val=Yes 00:10:05.161 21:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 21:06:27 -- accel/accel.sh@21 -- # val= 00:10:05.161 21:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # read -r var val 00:10:05.161 21:06:27 -- accel/accel.sh@21 -- # val= 00:10:05.161 21:06:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # IFS=: 00:10:05.161 21:06:27 -- accel/accel.sh@20 -- # read -r var val 00:10:06.536 21:06:28 -- accel/accel.sh@21 -- # val= 00:10:06.536 21:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.536 21:06:28 -- accel/accel.sh@20 -- # IFS=: 00:10:06.536 21:06:28 -- accel/accel.sh@20 -- # read -r var val 00:10:06.536 21:06:28 -- accel/accel.sh@21 -- # val= 00:10:06.536 21:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.536 21:06:28 -- accel/accel.sh@20 -- # IFS=: 00:10:06.536 21:06:28 -- accel/accel.sh@20 -- # read -r var val 00:10:06.536 21:06:28 -- accel/accel.sh@21 -- # val= 00:10:06.536 21:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.536 21:06:28 -- accel/accel.sh@20 -- # IFS=: 00:10:06.536 21:06:28 -- accel/accel.sh@20 -- # read -r var val 00:10:06.536 21:06:28 -- accel/accel.sh@21 -- # val= 00:10:06.536 21:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.536 21:06:28 -- accel/accel.sh@20 -- # IFS=: 00:10:06.536 21:06:28 -- accel/accel.sh@20 -- # read -r var val 00:10:06.536 21:06:28 -- accel/accel.sh@21 -- # val= 00:10:06.536 21:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.536 21:06:28 -- accel/accel.sh@20 -- # IFS=: 00:10:06.536 21:06:28 -- 
accel/accel.sh@20 -- # read -r var val 00:10:06.536 21:06:28 -- accel/accel.sh@21 -- # val= 00:10:06.536 21:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.536 21:06:28 -- accel/accel.sh@20 -- # IFS=: 00:10:06.536 21:06:28 -- accel/accel.sh@20 -- # read -r var val 00:10:06.536 ************************************ 00:10:06.536 END TEST accel_crc32c_C2 00:10:06.536 ************************************ 00:10:06.536 21:06:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:06.536 21:06:28 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:06.536 21:06:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:06.536 00:10:06.536 real 0m3.064s 00:10:06.536 user 0m2.659s 00:10:06.536 sys 0m0.258s 00:10:06.536 21:06:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.536 21:06:28 -- common/autotest_common.sh@10 -- # set +x 00:10:06.536 21:06:28 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:06.536 21:06:28 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:06.536 21:06:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:06.536 21:06:28 -- common/autotest_common.sh@10 -- # set +x 00:10:06.536 ************************************ 00:10:06.536 START TEST accel_copy 00:10:06.536 ************************************ 00:10:06.536 21:06:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:10:06.536 21:06:29 -- accel/accel.sh@16 -- # local accel_opc 00:10:06.536 21:06:29 -- accel/accel.sh@17 -- # local accel_module 00:10:06.536 21:06:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:10:06.536 21:06:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:06.536 21:06:29 -- accel/accel.sh@12 -- # build_accel_config 00:10:06.536 21:06:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:06.536 21:06:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:06.536 21:06:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:06.536 21:06:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:06.536 21:06:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:06.536 21:06:29 -- accel/accel.sh@41 -- # local IFS=, 00:10:06.536 21:06:29 -- accel/accel.sh@42 -- # jq -r . 00:10:06.536 [2024-06-07 21:06:29.043075] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:06.536 [2024-06-07 21:06:29.043350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120025 ] 00:10:06.536 [2024-06-07 21:06:29.212001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.795 [2024-06-07 21:06:29.296279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.170 21:06:30 -- accel/accel.sh@18 -- # out=' 00:10:08.170 SPDK Configuration: 00:10:08.170 Core mask: 0x1 00:10:08.170 00:10:08.170 Accel Perf Configuration: 00:10:08.170 Workload Type: copy 00:10:08.170 Transfer size: 4096 bytes 00:10:08.170 Vector count 1 00:10:08.170 Module: software 00:10:08.170 Queue depth: 32 00:10:08.170 Allocate depth: 32 00:10:08.170 # threads/core: 1 00:10:08.170 Run time: 1 seconds 00:10:08.170 Verify: Yes 00:10:08.170 00:10:08.170 Running for 1 seconds... 
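A quick consistency check that applies to every results table in this section: transfers per second times bytes per transfer reproduces the MiB/s column (1 MiB = 1048576 bytes). Checked here against the two crc32c tables above; the same arithmetic holds for the copy table that follows. Nothing in this block is tool output, it is only a reader-side check of the logged numbers.

    echo $(( 477184 * 4096 / 1048576 ))       # plain crc32c row -> 1864
    echo $(( 359136 * 2 * 4096 / 1048576 ))   # -C 2 row: two vectors per op -> 2805
    echo $(( 285888 * 4096 / 1048576 ))       # the copy row below -> 1116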
00:10:08.170 00:10:08.170 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:08.170 ------------------------------------------------------------------------------------ 00:10:08.170 0,0 285888/s 1116 MiB/s 0 0 00:10:08.170 ==================================================================================== 00:10:08.170 Total 285888/s 1116 MiB/s 0 0' 00:10:08.170 21:06:30 -- accel/accel.sh@20 -- # IFS=: 00:10:08.170 21:06:30 -- accel/accel.sh@20 -- # read -r var val 00:10:08.170 21:06:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:08.170 21:06:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:08.170 21:06:30 -- accel/accel.sh@12 -- # build_accel_config 00:10:08.170 21:06:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:08.170 21:06:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:08.170 21:06:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:08.170 21:06:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:08.170 21:06:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:08.170 21:06:30 -- accel/accel.sh@41 -- # local IFS=, 00:10:08.170 21:06:30 -- accel/accel.sh@42 -- # jq -r . 00:10:08.170 [2024-06-07 21:06:30.588414] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:08.170 [2024-06-07 21:06:30.588690] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120047 ] 00:10:08.170 [2024-06-07 21:06:30.756072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.170 [2024-06-07 21:06:30.846329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.429 21:06:30 -- accel/accel.sh@21 -- # val= 00:10:08.429 21:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # IFS=: 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # read -r var val 00:10:08.429 21:06:30 -- accel/accel.sh@21 -- # val= 00:10:08.429 21:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # IFS=: 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # read -r var val 00:10:08.429 21:06:30 -- accel/accel.sh@21 -- # val=0x1 00:10:08.429 21:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # IFS=: 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # read -r var val 00:10:08.429 21:06:30 -- accel/accel.sh@21 -- # val= 00:10:08.429 21:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # IFS=: 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # read -r var val 00:10:08.429 21:06:30 -- accel/accel.sh@21 -- # val= 00:10:08.429 21:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # IFS=: 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # read -r var val 00:10:08.429 21:06:30 -- accel/accel.sh@21 -- # val=copy 00:10:08.429 21:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.429 21:06:30 -- accel/accel.sh@24 -- # accel_opc=copy 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # IFS=: 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # read -r var val 00:10:08.429 21:06:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:08.429 21:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # IFS=: 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # read -r var val 00:10:08.429 21:06:30 -- 
accel/accel.sh@21 -- # val= 00:10:08.429 21:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # IFS=: 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # read -r var val 00:10:08.429 21:06:30 -- accel/accel.sh@21 -- # val=software 00:10:08.429 21:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.429 21:06:30 -- accel/accel.sh@23 -- # accel_module=software 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # IFS=: 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # read -r var val 00:10:08.429 21:06:30 -- accel/accel.sh@21 -- # val=32 00:10:08.429 21:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # IFS=: 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # read -r var val 00:10:08.429 21:06:30 -- accel/accel.sh@21 -- # val=32 00:10:08.429 21:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # IFS=: 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # read -r var val 00:10:08.429 21:06:30 -- accel/accel.sh@21 -- # val=1 00:10:08.429 21:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # IFS=: 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # read -r var val 00:10:08.429 21:06:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:08.429 21:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # IFS=: 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # read -r var val 00:10:08.429 21:06:30 -- accel/accel.sh@21 -- # val=Yes 00:10:08.429 21:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # IFS=: 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # read -r var val 00:10:08.429 21:06:30 -- accel/accel.sh@21 -- # val= 00:10:08.429 21:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # IFS=: 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # read -r var val 00:10:08.429 21:06:30 -- accel/accel.sh@21 -- # val= 00:10:08.429 21:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # IFS=: 00:10:08.429 21:06:30 -- accel/accel.sh@20 -- # read -r var val 00:10:09.806 21:06:32 -- accel/accel.sh@21 -- # val= 00:10:09.806 21:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.806 21:06:32 -- accel/accel.sh@20 -- # IFS=: 00:10:09.806 21:06:32 -- accel/accel.sh@20 -- # read -r var val 00:10:09.806 21:06:32 -- accel/accel.sh@21 -- # val= 00:10:09.806 21:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.806 21:06:32 -- accel/accel.sh@20 -- # IFS=: 00:10:09.806 21:06:32 -- accel/accel.sh@20 -- # read -r var val 00:10:09.806 21:06:32 -- accel/accel.sh@21 -- # val= 00:10:09.806 21:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.806 21:06:32 -- accel/accel.sh@20 -- # IFS=: 00:10:09.806 21:06:32 -- accel/accel.sh@20 -- # read -r var val 00:10:09.806 21:06:32 -- accel/accel.sh@21 -- # val= 00:10:09.806 21:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.806 21:06:32 -- accel/accel.sh@20 -- # IFS=: 00:10:09.806 21:06:32 -- accel/accel.sh@20 -- # read -r var val 00:10:09.806 21:06:32 -- accel/accel.sh@21 -- # val= 00:10:09.806 21:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.806 21:06:32 -- accel/accel.sh@20 -- # IFS=: 00:10:09.806 21:06:32 -- accel/accel.sh@20 -- # read -r var val 00:10:09.806 21:06:32 -- accel/accel.sh@21 -- # val= 00:10:09.806 21:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.806 21:06:32 -- accel/accel.sh@20 -- # IFS=: 00:10:09.806 21:06:32 -- 
accel/accel.sh@20 -- # read -r var val 00:10:09.806 21:06:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:09.806 ************************************ 00:10:09.806 END TEST accel_copy 00:10:09.806 ************************************ 00:10:09.806 21:06:32 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:10:09.806 21:06:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:09.806 00:10:09.806 real 0m3.087s 00:10:09.806 user 0m2.636s 00:10:09.806 sys 0m0.299s 00:10:09.806 21:06:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:09.806 21:06:32 -- common/autotest_common.sh@10 -- # set +x 00:10:09.806 21:06:32 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:09.806 21:06:32 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:09.806 21:06:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:09.806 21:06:32 -- common/autotest_common.sh@10 -- # set +x 00:10:09.806 ************************************ 00:10:09.806 START TEST accel_fill 00:10:09.806 ************************************ 00:10:09.806 21:06:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:09.806 21:06:32 -- accel/accel.sh@16 -- # local accel_opc 00:10:09.806 21:06:32 -- accel/accel.sh@17 -- # local accel_module 00:10:09.806 21:06:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:09.806 21:06:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:09.806 21:06:32 -- accel/accel.sh@12 -- # build_accel_config 00:10:09.806 21:06:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:09.806 21:06:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:09.806 21:06:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:09.806 21:06:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:09.806 21:06:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:09.806 21:06:32 -- accel/accel.sh@41 -- # local IFS=, 00:10:09.806 21:06:32 -- accel/accel.sh@42 -- # jq -r . 00:10:09.806 [2024-06-07 21:06:32.191062] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:09.806 [2024-06-07 21:06:32.191485] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120094 ] 00:10:09.806 [2024-06-07 21:06:32.361599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.806 [2024-06-07 21:06:32.452947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.181 21:06:33 -- accel/accel.sh@18 -- # out=' 00:10:11.181 SPDK Configuration: 00:10:11.181 Core mask: 0x1 00:10:11.181 00:10:11.181 Accel Perf Configuration: 00:10:11.181 Workload Type: fill 00:10:11.181 Fill pattern: 0x80 00:10:11.181 Transfer size: 4096 bytes 00:10:11.181 Vector count 1 00:10:11.181 Module: software 00:10:11.181 Queue depth: 64 00:10:11.181 Allocate depth: 64 00:10:11.181 # threads/core: 1 00:10:11.181 Run time: 1 seconds 00:10:11.181 Verify: Yes 00:10:11.181 00:10:11.181 Running for 1 seconds... 
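One small base-conversion trap in the fill case above: run_test passes -f 128 -q 64 -a 64, and the config echo reports the same byte back as "Fill pattern: 0x80" alongside queue and allocate depths of 64. A one-liner confirms the hex round-trip:

    printf '0x%02x\n' 128   # -> 0x80, matching the config echo above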
00:10:11.181 00:10:11.181 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:11.181 ------------------------------------------------------------------------------------ 00:10:11.181 0,0 405568/s 1584 MiB/s 0 0 00:10:11.181 ==================================================================================== 00:10:11.181 Total 405568/s 1584 MiB/s 0 0' 00:10:11.181 21:06:33 -- accel/accel.sh@20 -- # IFS=: 00:10:11.181 21:06:33 -- accel/accel.sh@20 -- # read -r var val 00:10:11.181 21:06:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:11.181 21:06:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:11.181 21:06:33 -- accel/accel.sh@12 -- # build_accel_config 00:10:11.181 21:06:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:11.181 21:06:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:11.181 21:06:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:11.181 21:06:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:11.181 21:06:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:11.181 21:06:33 -- accel/accel.sh@41 -- # local IFS=, 00:10:11.181 21:06:33 -- accel/accel.sh@42 -- # jq -r . 00:10:11.181 [2024-06-07 21:06:33.756365] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:11.181 [2024-06-07 21:06:33.756773] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120143 ] 00:10:11.440 [2024-06-07 21:06:33.922658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.440 [2024-06-07 21:06:34.012997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.440 21:06:34 -- accel/accel.sh@21 -- # val= 00:10:11.440 21:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # IFS=: 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # read -r var val 00:10:11.440 21:06:34 -- accel/accel.sh@21 -- # val= 00:10:11.440 21:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # IFS=: 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # read -r var val 00:10:11.440 21:06:34 -- accel/accel.sh@21 -- # val=0x1 00:10:11.440 21:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # IFS=: 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # read -r var val 00:10:11.440 21:06:34 -- accel/accel.sh@21 -- # val= 00:10:11.440 21:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # IFS=: 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # read -r var val 00:10:11.440 21:06:34 -- accel/accel.sh@21 -- # val= 00:10:11.440 21:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # IFS=: 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # read -r var val 00:10:11.440 21:06:34 -- accel/accel.sh@21 -- # val=fill 00:10:11.440 21:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.440 21:06:34 -- accel/accel.sh@24 -- # accel_opc=fill 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # IFS=: 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # read -r var val 00:10:11.440 21:06:34 -- accel/accel.sh@21 -- # val=0x80 00:10:11.440 21:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # IFS=: 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # read -r var val 
00:10:11.440 21:06:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:11.440 21:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # IFS=: 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # read -r var val 00:10:11.440 21:06:34 -- accel/accel.sh@21 -- # val= 00:10:11.440 21:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # IFS=: 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # read -r var val 00:10:11.440 21:06:34 -- accel/accel.sh@21 -- # val=software 00:10:11.440 21:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.440 21:06:34 -- accel/accel.sh@23 -- # accel_module=software 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # IFS=: 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # read -r var val 00:10:11.440 21:06:34 -- accel/accel.sh@21 -- # val=64 00:10:11.440 21:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # IFS=: 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # read -r var val 00:10:11.440 21:06:34 -- accel/accel.sh@21 -- # val=64 00:10:11.440 21:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # IFS=: 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # read -r var val 00:10:11.440 21:06:34 -- accel/accel.sh@21 -- # val=1 00:10:11.440 21:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # IFS=: 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # read -r var val 00:10:11.440 21:06:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:11.440 21:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # IFS=: 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # read -r var val 00:10:11.440 21:06:34 -- accel/accel.sh@21 -- # val=Yes 00:10:11.440 21:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # IFS=: 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # read -r var val 00:10:11.440 21:06:34 -- accel/accel.sh@21 -- # val= 00:10:11.440 21:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # IFS=: 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # read -r var val 00:10:11.440 21:06:34 -- accel/accel.sh@21 -- # val= 00:10:11.440 21:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # IFS=: 00:10:11.440 21:06:34 -- accel/accel.sh@20 -- # read -r var val 00:10:12.816 21:06:35 -- accel/accel.sh@21 -- # val= 00:10:12.816 21:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.816 21:06:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.816 21:06:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.816 21:06:35 -- accel/accel.sh@21 -- # val= 00:10:12.816 21:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.816 21:06:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.816 21:06:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.816 21:06:35 -- accel/accel.sh@21 -- # val= 00:10:12.816 21:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.816 21:06:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.816 21:06:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.816 21:06:35 -- accel/accel.sh@21 -- # val= 00:10:12.816 21:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.816 21:06:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.816 21:06:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.816 21:06:35 -- accel/accel.sh@21 -- # val= 00:10:12.816 21:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.816 21:06:35 -- accel/accel.sh@20 -- # IFS=: 
00:10:12.816 21:06:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.816 21:06:35 -- accel/accel.sh@21 -- # val= 00:10:12.816 21:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.816 21:06:35 -- accel/accel.sh@20 -- # IFS=: 00:10:12.816 21:06:35 -- accel/accel.sh@20 -- # read -r var val 00:10:12.816 ************************************ 00:10:12.816 END TEST accel_fill 00:10:12.816 ************************************ 00:10:12.816 21:06:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:12.816 21:06:35 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:10:12.816 21:06:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:12.816 00:10:12.816 real 0m3.126s 00:10:12.816 user 0m2.655s 00:10:12.816 sys 0m0.333s 00:10:12.816 21:06:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:12.817 21:06:35 -- common/autotest_common.sh@10 -- # set +x 00:10:12.817 21:06:35 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:12.817 21:06:35 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:12.817 21:06:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:12.817 21:06:35 -- common/autotest_common.sh@10 -- # set +x 00:10:12.817 ************************************ 00:10:12.817 START TEST accel_copy_crc32c 00:10:12.817 ************************************ 00:10:12.817 21:06:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:10:12.817 21:06:35 -- accel/accel.sh@16 -- # local accel_opc 00:10:12.817 21:06:35 -- accel/accel.sh@17 -- # local accel_module 00:10:12.817 21:06:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:12.817 21:06:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:12.817 21:06:35 -- accel/accel.sh@12 -- # build_accel_config 00:10:12.817 21:06:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:12.817 21:06:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:12.817 21:06:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:12.817 21:06:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:12.817 21:06:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:12.817 21:06:35 -- accel/accel.sh@41 -- # local IFS=, 00:10:12.817 21:06:35 -- accel/accel.sh@42 -- # jq -r . 00:10:12.817 [2024-06-07 21:06:35.359845] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:12.817 [2024-06-07 21:06:35.360201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120182 ] 00:10:13.075 [2024-06-07 21:06:35.514630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.075 [2024-06-07 21:06:35.602437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.453 21:06:36 -- accel/accel.sh@18 -- # out=' 00:10:14.453 SPDK Configuration: 00:10:14.453 Core mask: 0x1 00:10:14.453 00:10:14.453 Accel Perf Configuration: 00:10:14.453 Workload Type: copy_crc32c 00:10:14.453 CRC-32C seed: 0 00:10:14.453 Vector size: 4096 bytes 00:10:14.453 Transfer size: 4096 bytes 00:10:14.453 Vector count 1 00:10:14.453 Module: software 00:10:14.453 Queue depth: 32 00:10:14.453 Allocate depth: 32 00:10:14.453 # threads/core: 1 00:10:14.453 Run time: 1 seconds 00:10:14.453 Verify: Yes 00:10:14.453 00:10:14.453 Running for 1 seconds... 
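Every case in this section goes through the same run_test harness, which prints the asterisk banners (END TEST accel_fill / START TEST accel_copy_crc32c just above), times the body so the real/user/sys triples land in the log, and toggles xtrace around its own bookkeeping. A rough sketch of that shape follows; only the banner and timing behaviour is taken from the log, the rest is assumed.

    run_test() {
        local name=$1; shift
        xtrace_disable                  # seen at autotest_common.sh@1083/@1105
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        xtrace_restore                  # assumed counterpart, not in this excerpt
        time "$@"
        local rc=$?
        xtrace_disable
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        xtrace_restore
        return $rc
    }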
00:10:14.453 00:10:14.453 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:14.453 ------------------------------------------------------------------------------------ 00:10:14.453 0,0 213376/s 833 MiB/s 0 0 00:10:14.453 ==================================================================================== 00:10:14.453 Total 213376/s 833 MiB/s 0 0' 00:10:14.453 21:06:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:14.453 21:06:36 -- accel/accel.sh@20 -- # IFS=: 00:10:14.453 21:06:36 -- accel/accel.sh@20 -- # read -r var val 00:10:14.453 21:06:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:14.453 21:06:36 -- accel/accel.sh@12 -- # build_accel_config 00:10:14.453 21:06:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:14.453 21:06:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:14.453 21:06:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:14.453 21:06:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:14.453 21:06:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:14.453 21:06:36 -- accel/accel.sh@41 -- # local IFS=, 00:10:14.453 21:06:36 -- accel/accel.sh@42 -- # jq -r . 00:10:14.453 [2024-06-07 21:06:36.913137] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:14.453 [2024-06-07 21:06:36.913593] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120211 ] 00:10:14.453 [2024-06-07 21:06:37.082591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.713 [2024-06-07 21:06:37.178558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.713 21:06:37 -- accel/accel.sh@21 -- # val= 00:10:14.713 21:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.713 21:06:37 -- accel/accel.sh@21 -- # val= 00:10:14.713 21:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.713 21:06:37 -- accel/accel.sh@21 -- # val=0x1 00:10:14.713 21:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.713 21:06:37 -- accel/accel.sh@21 -- # val= 00:10:14.713 21:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.713 21:06:37 -- accel/accel.sh@21 -- # val= 00:10:14.713 21:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.713 21:06:37 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:14.713 21:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.713 21:06:37 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.713 21:06:37 -- accel/accel.sh@21 -- # val=0 00:10:14.713 21:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.713 
21:06:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:14.713 21:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.713 21:06:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:14.713 21:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.713 21:06:37 -- accel/accel.sh@21 -- # val= 00:10:14.713 21:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.713 21:06:37 -- accel/accel.sh@21 -- # val=software 00:10:14.713 21:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.713 21:06:37 -- accel/accel.sh@23 -- # accel_module=software 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.713 21:06:37 -- accel/accel.sh@21 -- # val=32 00:10:14.713 21:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.713 21:06:37 -- accel/accel.sh@21 -- # val=32 00:10:14.713 21:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.713 21:06:37 -- accel/accel.sh@21 -- # val=1 00:10:14.713 21:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.713 21:06:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:14.713 21:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.713 21:06:37 -- accel/accel.sh@21 -- # val=Yes 00:10:14.713 21:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.713 21:06:37 -- accel/accel.sh@21 -- # val= 00:10:14.713 21:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # read -r var val 00:10:14.713 21:06:37 -- accel/accel.sh@21 -- # val= 00:10:14.713 21:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # IFS=: 00:10:14.713 21:06:37 -- accel/accel.sh@20 -- # read -r var val 00:10:16.091 21:06:38 -- accel/accel.sh@21 -- # val= 00:10:16.091 21:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.091 21:06:38 -- accel/accel.sh@20 -- # IFS=: 00:10:16.091 21:06:38 -- accel/accel.sh@20 -- # read -r var val 00:10:16.091 21:06:38 -- accel/accel.sh@21 -- # val= 00:10:16.091 21:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.091 21:06:38 -- accel/accel.sh@20 -- # IFS=: 00:10:16.091 21:06:38 -- accel/accel.sh@20 -- # read -r var val 00:10:16.091 21:06:38 -- accel/accel.sh@21 -- # val= 00:10:16.091 21:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.091 21:06:38 -- accel/accel.sh@20 -- # IFS=: 00:10:16.091 21:06:38 -- accel/accel.sh@20 -- # read -r var val 00:10:16.091 21:06:38 -- accel/accel.sh@21 -- # val= 00:10:16.091 21:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.091 21:06:38 -- accel/accel.sh@20 -- # IFS=: 
00:10:16.091 21:06:38 -- accel/accel.sh@20 -- # read -r var val 00:10:16.091 21:06:38 -- accel/accel.sh@21 -- # val= 00:10:16.091 21:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.091 21:06:38 -- accel/accel.sh@20 -- # IFS=: 00:10:16.091 21:06:38 -- accel/accel.sh@20 -- # read -r var val 00:10:16.091 21:06:38 -- accel/accel.sh@21 -- # val= 00:10:16.091 21:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.091 21:06:38 -- accel/accel.sh@20 -- # IFS=: 00:10:16.091 21:06:38 -- accel/accel.sh@20 -- # read -r var val 00:10:16.091 21:06:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:16.091 21:06:38 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:16.091 21:06:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:16.091 00:10:16.091 real 0m3.125s 00:10:16.091 user 0m2.672s 00:10:16.091 sys 0m0.321s 00:10:16.091 21:06:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.091 21:06:38 -- common/autotest_common.sh@10 -- # set +x 00:10:16.091 ************************************ 00:10:16.091 END TEST accel_copy_crc32c 00:10:16.091 ************************************ 00:10:16.091 21:06:38 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:16.091 21:06:38 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:16.091 21:06:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:16.091 21:06:38 -- common/autotest_common.sh@10 -- # set +x 00:10:16.091 ************************************ 00:10:16.091 START TEST accel_copy_crc32c_C2 00:10:16.091 ************************************ 00:10:16.091 21:06:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:16.091 21:06:38 -- accel/accel.sh@16 -- # local accel_opc 00:10:16.091 21:06:38 -- accel/accel.sh@17 -- # local accel_module 00:10:16.091 21:06:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:16.091 21:06:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:16.091 21:06:38 -- accel/accel.sh@12 -- # build_accel_config 00:10:16.091 21:06:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:16.091 21:06:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:16.091 21:06:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:16.091 21:06:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:16.091 21:06:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:16.091 21:06:38 -- accel/accel.sh@41 -- # local IFS=, 00:10:16.091 21:06:38 -- accel/accel.sh@42 -- # jq -r . 00:10:16.091 [2024-06-07 21:06:38.548164] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
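Note on the accel_copy_crc32c run that ends above: the harness drives the accel_perf example binary from the CI checkout, so a minimal standalone rerun only needs the same flags. The exact flags for this non-vectored case are inferred from the harness pattern (an assumption; only the -C 2 variant's command line is visible in the trace), and the binary path is the one this job uses, which will differ on another machine:

    # Assumed standalone rerun of the copy+CRC-32C workload:
    # -t 1 = run for 1 second, -w = workload type, -y = verify results.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y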
00:10:16.091 [2024-06-07 21:06:38.548581] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120251 ] 00:10:16.091 [2024-06-07 21:06:38.715452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.350 [2024-06-07 21:06:38.817908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.727 21:06:40 -- accel/accel.sh@18 -- # out=' 00:10:17.727 SPDK Configuration: 00:10:17.727 Core mask: 0x1 00:10:17.727 00:10:17.727 Accel Perf Configuration: 00:10:17.727 Workload Type: copy_crc32c 00:10:17.727 CRC-32C seed: 0 00:10:17.727 Vector size: 4096 bytes 00:10:17.727 Transfer size: 8192 bytes 00:10:17.727 Vector count 2 00:10:17.727 Module: software 00:10:17.727 Queue depth: 32 00:10:17.727 Allocate depth: 32 00:10:17.727 # threads/core: 1 00:10:17.727 Run time: 1 seconds 00:10:17.727 Verify: Yes 00:10:17.727 00:10:17.727 Running for 1 seconds... 00:10:17.727 00:10:17.727 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:17.727 ------------------------------------------------------------------------------------ 00:10:17.727 0,0 161824/s 1264 MiB/s 0 0 00:10:17.727 ==================================================================================== 00:10:17.727 Total 161824/s 1264 MiB/s 0 0' 00:10:17.727 21:06:40 -- accel/accel.sh@20 -- # IFS=: 00:10:17.727 21:06:40 -- accel/accel.sh@20 -- # read -r var val 00:10:17.727 21:06:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:17.727 21:06:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:17.727 21:06:40 -- accel/accel.sh@12 -- # build_accel_config 00:10:17.727 21:06:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:17.727 21:06:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:17.727 21:06:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:17.727 21:06:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:17.727 21:06:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:17.727 21:06:40 -- accel/accel.sh@41 -- # local IFS=, 00:10:17.727 21:06:40 -- accel/accel.sh@42 -- # jq -r . 00:10:17.727 [2024-06-07 21:06:40.121272] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:10:17.727 [2024-06-07 21:06:40.121792] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120280 ] 00:10:17.727 [2024-06-07 21:06:40.287074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.727 [2024-06-07 21:06:40.371343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.985 21:06:40 -- accel/accel.sh@21 -- # val= 00:10:17.985 21:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.985 21:06:40 -- accel/accel.sh@20 -- # IFS=: 00:10:17.985 21:06:40 -- accel/accel.sh@20 -- # read -r var val 00:10:17.985 21:06:40 -- accel/accel.sh@21 -- # val= 00:10:17.985 21:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.985 21:06:40 -- accel/accel.sh@20 -- # IFS=: 00:10:17.985 21:06:40 -- accel/accel.sh@20 -- # read -r var val 00:10:17.985 21:06:40 -- accel/accel.sh@21 -- # val=0x1 00:10:17.985 21:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # IFS=: 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # read -r var val 00:10:17.986 21:06:40 -- accel/accel.sh@21 -- # val= 00:10:17.986 21:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # IFS=: 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # read -r var val 00:10:17.986 21:06:40 -- accel/accel.sh@21 -- # val= 00:10:17.986 21:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # IFS=: 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # read -r var val 00:10:17.986 21:06:40 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:17.986 21:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.986 21:06:40 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # IFS=: 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # read -r var val 00:10:17.986 21:06:40 -- accel/accel.sh@21 -- # val=0 00:10:17.986 21:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # IFS=: 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # read -r var val 00:10:17.986 21:06:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:17.986 21:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # IFS=: 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # read -r var val 00:10:17.986 21:06:40 -- accel/accel.sh@21 -- # val='8192 bytes' 00:10:17.986 21:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # IFS=: 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # read -r var val 00:10:17.986 21:06:40 -- accel/accel.sh@21 -- # val= 00:10:17.986 21:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # IFS=: 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # read -r var val 00:10:17.986 21:06:40 -- accel/accel.sh@21 -- # val=software 00:10:17.986 21:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.986 21:06:40 -- accel/accel.sh@23 -- # accel_module=software 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # IFS=: 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # read -r var val 00:10:17.986 21:06:40 -- accel/accel.sh@21 -- # val=32 00:10:17.986 21:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # IFS=: 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # read -r var val 00:10:17.986 21:06:40 -- accel/accel.sh@21 -- # val=32 
00:10:17.986 21:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # IFS=: 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # read -r var val 00:10:17.986 21:06:40 -- accel/accel.sh@21 -- # val=1 00:10:17.986 21:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # IFS=: 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # read -r var val 00:10:17.986 21:06:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:17.986 21:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # IFS=: 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # read -r var val 00:10:17.986 21:06:40 -- accel/accel.sh@21 -- # val=Yes 00:10:17.986 21:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # IFS=: 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # read -r var val 00:10:17.986 21:06:40 -- accel/accel.sh@21 -- # val= 00:10:17.986 21:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # IFS=: 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # read -r var val 00:10:17.986 21:06:40 -- accel/accel.sh@21 -- # val= 00:10:17.986 21:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # IFS=: 00:10:17.986 21:06:40 -- accel/accel.sh@20 -- # read -r var val 00:10:19.364 21:06:41 -- accel/accel.sh@21 -- # val= 00:10:19.364 21:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.364 21:06:41 -- accel/accel.sh@20 -- # IFS=: 00:10:19.364 21:06:41 -- accel/accel.sh@20 -- # read -r var val 00:10:19.364 21:06:41 -- accel/accel.sh@21 -- # val= 00:10:19.364 21:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.364 21:06:41 -- accel/accel.sh@20 -- # IFS=: 00:10:19.364 21:06:41 -- accel/accel.sh@20 -- # read -r var val 00:10:19.364 21:06:41 -- accel/accel.sh@21 -- # val= 00:10:19.364 21:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.364 21:06:41 -- accel/accel.sh@20 -- # IFS=: 00:10:19.364 21:06:41 -- accel/accel.sh@20 -- # read -r var val 00:10:19.364 21:06:41 -- accel/accel.sh@21 -- # val= 00:10:19.364 21:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.364 21:06:41 -- accel/accel.sh@20 -- # IFS=: 00:10:19.364 21:06:41 -- accel/accel.sh@20 -- # read -r var val 00:10:19.364 21:06:41 -- accel/accel.sh@21 -- # val= 00:10:19.364 21:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.364 21:06:41 -- accel/accel.sh@20 -- # IFS=: 00:10:19.364 21:06:41 -- accel/accel.sh@20 -- # read -r var val 00:10:19.364 21:06:41 -- accel/accel.sh@21 -- # val= 00:10:19.364 21:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.364 21:06:41 -- accel/accel.sh@20 -- # IFS=: 00:10:19.364 21:06:41 -- accel/accel.sh@20 -- # read -r var val 00:10:19.364 ************************************ 00:10:19.364 END TEST accel_copy_crc32c_C2 00:10:19.364 ************************************ 00:10:19.364 21:06:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:19.364 21:06:41 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:19.364 21:06:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:19.364 00:10:19.364 real 0m3.144s 00:10:19.364 user 0m2.679s 00:10:19.364 sys 0m0.315s 00:10:19.364 21:06:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.364 21:06:41 -- common/autotest_common.sh@10 -- # set +x 00:10:19.364 21:06:41 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:19.364 21:06:41 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
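The accel_copy_crc32c_C2 variant that finishes above adds -C 2, splitting each 8192-byte transfer into a two-buffer vector of 4096 bytes, which is what the "Vector count 2" configuration line records. A sketch of the command, taken from the accel.sh@12 trace; the -c /dev/fd/62 argument is the JSON accel config the harness feeds over a file descriptor, and dropping it falls back to the plain software module:

    # Vectored copy+CRC-32C; -C sets the chained source-buffer count.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2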
00:10:19.364 21:06:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:19.364 21:06:41 -- common/autotest_common.sh@10 -- # set +x 00:10:19.364 ************************************ 00:10:19.364 START TEST accel_dualcast 00:10:19.364 ************************************ 00:10:19.364 21:06:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:10:19.364 21:06:41 -- accel/accel.sh@16 -- # local accel_opc 00:10:19.364 21:06:41 -- accel/accel.sh@17 -- # local accel_module 00:10:19.364 21:06:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:10:19.364 21:06:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:19.364 21:06:41 -- accel/accel.sh@12 -- # build_accel_config 00:10:19.364 21:06:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:19.364 21:06:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:19.364 21:06:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:19.364 21:06:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:19.364 21:06:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:19.364 21:06:41 -- accel/accel.sh@41 -- # local IFS=, 00:10:19.364 21:06:41 -- accel/accel.sh@42 -- # jq -r . 00:10:19.364 [2024-06-07 21:06:41.741387] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:19.364 [2024-06-07 21:06:41.741781] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120321 ] 00:10:19.364 [2024-06-07 21:06:41.913219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.364 [2024-06-07 21:06:42.017588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.740 21:06:43 -- accel/accel.sh@18 -- # out=' 00:10:20.740 SPDK Configuration: 00:10:20.740 Core mask: 0x1 00:10:20.740 00:10:20.740 Accel Perf Configuration: 00:10:20.740 Workload Type: dualcast 00:10:20.740 Transfer size: 4096 bytes 00:10:20.740 Vector count 1 00:10:20.740 Module: software 00:10:20.740 Queue depth: 32 00:10:20.740 Allocate depth: 32 00:10:20.740 # threads/core: 1 00:10:20.740 Run time: 1 seconds 00:10:20.740 Verify: Yes 00:10:20.740 00:10:20.740 Running for 1 seconds... 00:10:20.740 00:10:20.740 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:20.740 ------------------------------------------------------------------------------------ 00:10:20.740 0,0 262720/s 1026 MiB/s 0 0 00:10:20.740 ==================================================================================== 00:10:20.741 Total 262720/s 1026 MiB/s 0 0' 00:10:20.741 21:06:43 -- accel/accel.sh@20 -- # IFS=: 00:10:20.741 21:06:43 -- accel/accel.sh@20 -- # read -r var val 00:10:20.741 21:06:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:20.741 21:06:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:20.741 21:06:43 -- accel/accel.sh@12 -- # build_accel_config 00:10:20.741 21:06:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:20.741 21:06:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:20.741 21:06:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:20.741 21:06:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:20.741 21:06:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:20.741 21:06:43 -- accel/accel.sh@41 -- # local IFS=, 00:10:20.741 21:06:43 -- accel/accel.sh@42 -- # jq -r . 
00:10:20.741 [2024-06-07 21:06:43.320184] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:20.741 [2024-06-07 21:06:43.320635] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120366 ] 00:10:20.999 [2024-06-07 21:06:43.493030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.999 [2024-06-07 21:06:43.604831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.257 21:06:43 -- accel/accel.sh@21 -- # val= 00:10:21.257 21:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # IFS=: 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # read -r var val 00:10:21.257 21:06:43 -- accel/accel.sh@21 -- # val= 00:10:21.257 21:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # IFS=: 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # read -r var val 00:10:21.257 21:06:43 -- accel/accel.sh@21 -- # val=0x1 00:10:21.257 21:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # IFS=: 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # read -r var val 00:10:21.257 21:06:43 -- accel/accel.sh@21 -- # val= 00:10:21.257 21:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # IFS=: 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # read -r var val 00:10:21.257 21:06:43 -- accel/accel.sh@21 -- # val= 00:10:21.257 21:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # IFS=: 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # read -r var val 00:10:21.257 21:06:43 -- accel/accel.sh@21 -- # val=dualcast 00:10:21.257 21:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.257 21:06:43 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # IFS=: 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # read -r var val 00:10:21.257 21:06:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:21.257 21:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # IFS=: 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # read -r var val 00:10:21.257 21:06:43 -- accel/accel.sh@21 -- # val= 00:10:21.257 21:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # IFS=: 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # read -r var val 00:10:21.257 21:06:43 -- accel/accel.sh@21 -- # val=software 00:10:21.257 21:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.257 21:06:43 -- accel/accel.sh@23 -- # accel_module=software 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # IFS=: 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # read -r var val 00:10:21.257 21:06:43 -- accel/accel.sh@21 -- # val=32 00:10:21.257 21:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # IFS=: 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # read -r var val 00:10:21.257 21:06:43 -- accel/accel.sh@21 -- # val=32 00:10:21.257 21:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # IFS=: 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # read -r var val 00:10:21.257 21:06:43 -- accel/accel.sh@21 -- # val=1 00:10:21.257 21:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # IFS=: 00:10:21.257 
21:06:43 -- accel/accel.sh@20 -- # read -r var val 00:10:21.257 21:06:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:21.257 21:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # IFS=: 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # read -r var val 00:10:21.257 21:06:43 -- accel/accel.sh@21 -- # val=Yes 00:10:21.257 21:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # IFS=: 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # read -r var val 00:10:21.257 21:06:43 -- accel/accel.sh@21 -- # val= 00:10:21.257 21:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # IFS=: 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # read -r var val 00:10:21.257 21:06:43 -- accel/accel.sh@21 -- # val= 00:10:21.257 21:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # IFS=: 00:10:21.257 21:06:43 -- accel/accel.sh@20 -- # read -r var val 00:10:22.633 21:06:44 -- accel/accel.sh@21 -- # val= 00:10:22.633 21:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.633 21:06:44 -- accel/accel.sh@20 -- # IFS=: 00:10:22.633 21:06:44 -- accel/accel.sh@20 -- # read -r var val 00:10:22.633 21:06:44 -- accel/accel.sh@21 -- # val= 00:10:22.633 21:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.633 21:06:44 -- accel/accel.sh@20 -- # IFS=: 00:10:22.633 21:06:44 -- accel/accel.sh@20 -- # read -r var val 00:10:22.633 21:06:44 -- accel/accel.sh@21 -- # val= 00:10:22.633 21:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.633 21:06:44 -- accel/accel.sh@20 -- # IFS=: 00:10:22.633 21:06:44 -- accel/accel.sh@20 -- # read -r var val 00:10:22.633 21:06:44 -- accel/accel.sh@21 -- # val= 00:10:22.633 21:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.633 21:06:44 -- accel/accel.sh@20 -- # IFS=: 00:10:22.633 21:06:44 -- accel/accel.sh@20 -- # read -r var val 00:10:22.633 21:06:44 -- accel/accel.sh@21 -- # val= 00:10:22.633 21:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.633 21:06:44 -- accel/accel.sh@20 -- # IFS=: 00:10:22.633 21:06:44 -- accel/accel.sh@20 -- # read -r var val 00:10:22.633 21:06:44 -- accel/accel.sh@21 -- # val= 00:10:22.633 21:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.633 21:06:44 -- accel/accel.sh@20 -- # IFS=: 00:10:22.633 21:06:44 -- accel/accel.sh@20 -- # read -r var val 00:10:22.633 ************************************ 00:10:22.633 END TEST accel_dualcast 00:10:22.633 ************************************ 00:10:22.633 21:06:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:22.633 21:06:44 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:10:22.633 21:06:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:22.633 00:10:22.633 real 0m3.167s 00:10:22.633 user 0m2.694s 00:10:22.633 sys 0m0.331s 00:10:22.633 21:06:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.633 21:06:44 -- common/autotest_common.sh@10 -- # set +x 00:10:22.633 21:06:44 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:22.633 21:06:44 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:22.633 21:06:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:22.633 21:06:44 -- common/autotest_common.sh@10 -- # set +x 00:10:22.633 ************************************ 00:10:22.633 START TEST accel_compare 00:10:22.633 ************************************ 00:10:22.633 21:06:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:10:22.633 
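For the accel_dualcast run that ends above (262720 transfers/s, 1026 MiB/s), the workload copies a single 4096-byte source into two destination buffers per operation and, with -y, verifies both copies. Minimal sketch with the flags from the trace:

    # One source fanned out to two destinations per operation.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y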
21:06:44 -- accel/accel.sh@16 -- # local accel_opc 00:10:22.633 21:06:44 -- accel/accel.sh@17 -- # local accel_module 00:10:22.633 21:06:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:10:22.633 21:06:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:22.633 21:06:44 -- accel/accel.sh@12 -- # build_accel_config 00:10:22.633 21:06:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:22.633 21:06:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:22.633 21:06:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:22.633 21:06:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:22.633 21:06:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:22.633 21:06:44 -- accel/accel.sh@41 -- # local IFS=, 00:10:22.633 21:06:44 -- accel/accel.sh@42 -- # jq -r . 00:10:22.633 [2024-06-07 21:06:44.961041] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:22.633 [2024-06-07 21:06:44.961416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120408 ] 00:10:22.633 [2024-06-07 21:06:45.128482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.633 [2024-06-07 21:06:45.240039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.008 21:06:46 -- accel/accel.sh@18 -- # out=' 00:10:24.008 SPDK Configuration: 00:10:24.008 Core mask: 0x1 00:10:24.008 00:10:24.008 Accel Perf Configuration: 00:10:24.008 Workload Type: compare 00:10:24.008 Transfer size: 4096 bytes 00:10:24.008 Vector count 1 00:10:24.008 Module: software 00:10:24.008 Queue depth: 32 00:10:24.008 Allocate depth: 32 00:10:24.008 # threads/core: 1 00:10:24.008 Run time: 1 seconds 00:10:24.008 Verify: Yes 00:10:24.008 00:10:24.008 Running for 1 seconds... 00:10:24.008 00:10:24.008 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:24.008 ------------------------------------------------------------------------------------ 00:10:24.008 0,0 370624/s 1447 MiB/s 0 0 00:10:24.008 ==================================================================================== 00:10:24.008 Total 370624/s 1447 MiB/s 0 0' 00:10:24.008 21:06:46 -- accel/accel.sh@20 -- # IFS=: 00:10:24.008 21:06:46 -- accel/accel.sh@20 -- # read -r var val 00:10:24.008 21:06:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:24.008 21:06:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:24.008 21:06:46 -- accel/accel.sh@12 -- # build_accel_config 00:10:24.008 21:06:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:24.008 21:06:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:24.008 21:06:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:24.009 21:06:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:24.009 21:06:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:24.009 21:06:46 -- accel/accel.sh@41 -- # local IFS=, 00:10:24.009 21:06:46 -- accel/accel.sh@42 -- # jq -r . 00:10:24.009 [2024-06-07 21:06:46.544795] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:10:24.009 [2024-06-07 21:06:46.545398] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120435 ] 00:10:24.267 [2024-06-07 21:06:46.714151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.267 [2024-06-07 21:06:46.813530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.267 21:06:46 -- accel/accel.sh@21 -- # val= 00:10:24.267 21:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # IFS=: 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # read -r var val 00:10:24.267 21:06:46 -- accel/accel.sh@21 -- # val= 00:10:24.267 21:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # IFS=: 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # read -r var val 00:10:24.267 21:06:46 -- accel/accel.sh@21 -- # val=0x1 00:10:24.267 21:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # IFS=: 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # read -r var val 00:10:24.267 21:06:46 -- accel/accel.sh@21 -- # val= 00:10:24.267 21:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # IFS=: 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # read -r var val 00:10:24.267 21:06:46 -- accel/accel.sh@21 -- # val= 00:10:24.267 21:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # IFS=: 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # read -r var val 00:10:24.267 21:06:46 -- accel/accel.sh@21 -- # val=compare 00:10:24.267 21:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.267 21:06:46 -- accel/accel.sh@24 -- # accel_opc=compare 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # IFS=: 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # read -r var val 00:10:24.267 21:06:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:24.267 21:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # IFS=: 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # read -r var val 00:10:24.267 21:06:46 -- accel/accel.sh@21 -- # val= 00:10:24.267 21:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # IFS=: 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # read -r var val 00:10:24.267 21:06:46 -- accel/accel.sh@21 -- # val=software 00:10:24.267 21:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.267 21:06:46 -- accel/accel.sh@23 -- # accel_module=software 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # IFS=: 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # read -r var val 00:10:24.267 21:06:46 -- accel/accel.sh@21 -- # val=32 00:10:24.267 21:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # IFS=: 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # read -r var val 00:10:24.267 21:06:46 -- accel/accel.sh@21 -- # val=32 00:10:24.267 21:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # IFS=: 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # read -r var val 00:10:24.267 21:06:46 -- accel/accel.sh@21 -- # val=1 00:10:24.267 21:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # IFS=: 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # read -r var val 00:10:24.267 21:06:46 -- accel/accel.sh@21 -- # val='1 seconds' 
00:10:24.267 21:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # IFS=: 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # read -r var val 00:10:24.267 21:06:46 -- accel/accel.sh@21 -- # val=Yes 00:10:24.267 21:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # IFS=: 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # read -r var val 00:10:24.267 21:06:46 -- accel/accel.sh@21 -- # val= 00:10:24.267 21:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # IFS=: 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # read -r var val 00:10:24.267 21:06:46 -- accel/accel.sh@21 -- # val= 00:10:24.267 21:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # IFS=: 00:10:24.267 21:06:46 -- accel/accel.sh@20 -- # read -r var val 00:10:25.674 21:06:48 -- accel/accel.sh@21 -- # val= 00:10:25.674 21:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.674 21:06:48 -- accel/accel.sh@20 -- # IFS=: 00:10:25.674 21:06:48 -- accel/accel.sh@20 -- # read -r var val 00:10:25.675 21:06:48 -- accel/accel.sh@21 -- # val= 00:10:25.675 21:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.675 21:06:48 -- accel/accel.sh@20 -- # IFS=: 00:10:25.675 21:06:48 -- accel/accel.sh@20 -- # read -r var val 00:10:25.675 21:06:48 -- accel/accel.sh@21 -- # val= 00:10:25.675 21:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.675 21:06:48 -- accel/accel.sh@20 -- # IFS=: 00:10:25.675 21:06:48 -- accel/accel.sh@20 -- # read -r var val 00:10:25.675 21:06:48 -- accel/accel.sh@21 -- # val= 00:10:25.675 21:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.675 21:06:48 -- accel/accel.sh@20 -- # IFS=: 00:10:25.675 21:06:48 -- accel/accel.sh@20 -- # read -r var val 00:10:25.675 21:06:48 -- accel/accel.sh@21 -- # val= 00:10:25.675 21:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.675 21:06:48 -- accel/accel.sh@20 -- # IFS=: 00:10:25.675 21:06:48 -- accel/accel.sh@20 -- # read -r var val 00:10:25.675 21:06:48 -- accel/accel.sh@21 -- # val= 00:10:25.675 21:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.675 21:06:48 -- accel/accel.sh@20 -- # IFS=: 00:10:25.675 21:06:48 -- accel/accel.sh@20 -- # read -r var val 00:10:25.675 ************************************ 00:10:25.675 END TEST accel_compare 00:10:25.675 ************************************ 00:10:25.675 21:06:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:25.675 21:06:48 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:10:25.675 21:06:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:25.675 00:10:25.675 real 0m3.178s 00:10:25.675 user 0m2.749s 00:10:25.675 sys 0m0.282s 00:10:25.675 21:06:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.675 21:06:48 -- common/autotest_common.sh@10 -- # set +x 00:10:25.675 21:06:48 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:25.675 21:06:48 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:25.675 21:06:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:25.675 21:06:48 -- common/autotest_common.sh@10 -- # set +x 00:10:25.675 ************************************ 00:10:25.675 START TEST accel_xor 00:10:25.675 ************************************ 00:10:25.675 21:06:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:10:25.675 21:06:48 -- accel/accel.sh@16 -- # local accel_opc 00:10:25.675 21:06:48 -- accel/accel.sh@17 -- # local accel_module 00:10:25.675 
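The accel_compare test that ends above exercises a buffer-equality check; any mismatch would land in the Miscompares column, which stays 0 here. Equivalent direct invocation, per the accel.sh@12 trace:

    # Compare two equal-sized 4096-byte buffers for one second.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y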
21:06:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:10:25.675 21:06:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:25.675 21:06:48 -- accel/accel.sh@12 -- # build_accel_config 00:10:25.675 21:06:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:25.675 21:06:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.675 21:06:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.675 21:06:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:25.675 21:06:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:25.675 21:06:48 -- accel/accel.sh@41 -- # local IFS=, 00:10:25.675 21:06:48 -- accel/accel.sh@42 -- # jq -r . 00:10:25.675 [2024-06-07 21:06:48.193576] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:25.675 [2024-06-07 21:06:48.193993] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120477 ] 00:10:25.933 [2024-06-07 21:06:48.365230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.933 [2024-06-07 21:06:48.469375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.311 21:06:49 -- accel/accel.sh@18 -- # out=' 00:10:27.311 SPDK Configuration: 00:10:27.311 Core mask: 0x1 00:10:27.311 00:10:27.311 Accel Perf Configuration: 00:10:27.311 Workload Type: xor 00:10:27.311 Source buffers: 2 00:10:27.311 Transfer size: 4096 bytes 00:10:27.311 Vector count 1 00:10:27.311 Module: software 00:10:27.311 Queue depth: 32 00:10:27.311 Allocate depth: 32 00:10:27.311 # threads/core: 1 00:10:27.311 Run time: 1 seconds 00:10:27.311 Verify: Yes 00:10:27.311 00:10:27.311 Running for 1 seconds... 00:10:27.311 00:10:27.311 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:27.311 ------------------------------------------------------------------------------------ 00:10:27.311 0,0 185888/s 726 MiB/s 0 0 00:10:27.311 ==================================================================================== 00:10:27.311 Total 185888/s 726 MiB/s 0 0' 00:10:27.311 21:06:49 -- accel/accel.sh@20 -- # IFS=: 00:10:27.311 21:06:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:27.311 21:06:49 -- accel/accel.sh@20 -- # read -r var val 00:10:27.311 21:06:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:27.311 21:06:49 -- accel/accel.sh@12 -- # build_accel_config 00:10:27.311 21:06:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:27.311 21:06:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:27.311 21:06:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:27.311 21:06:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:27.311 21:06:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:27.311 21:06:49 -- accel/accel.sh@41 -- # local IFS=, 00:10:27.311 21:06:49 -- accel/accel.sh@42 -- # jq -r . 00:10:27.311 [2024-06-07 21:06:49.816263] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:10:27.311 [2024-06-07 21:06:49.816823] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120504 ] 00:10:27.570 [2024-06-07 21:06:49.989827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.570 [2024-06-07 21:06:50.098725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.570 21:06:50 -- accel/accel.sh@21 -- # val= 00:10:27.570 21:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # IFS=: 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # read -r var val 00:10:27.570 21:06:50 -- accel/accel.sh@21 -- # val= 00:10:27.570 21:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # IFS=: 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # read -r var val 00:10:27.570 21:06:50 -- accel/accel.sh@21 -- # val=0x1 00:10:27.570 21:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # IFS=: 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # read -r var val 00:10:27.570 21:06:50 -- accel/accel.sh@21 -- # val= 00:10:27.570 21:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # IFS=: 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # read -r var val 00:10:27.570 21:06:50 -- accel/accel.sh@21 -- # val= 00:10:27.570 21:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # IFS=: 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # read -r var val 00:10:27.570 21:06:50 -- accel/accel.sh@21 -- # val=xor 00:10:27.570 21:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.570 21:06:50 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # IFS=: 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # read -r var val 00:10:27.570 21:06:50 -- accel/accel.sh@21 -- # val=2 00:10:27.570 21:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # IFS=: 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # read -r var val 00:10:27.570 21:06:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:27.570 21:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # IFS=: 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # read -r var val 00:10:27.570 21:06:50 -- accel/accel.sh@21 -- # val= 00:10:27.570 21:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # IFS=: 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # read -r var val 00:10:27.570 21:06:50 -- accel/accel.sh@21 -- # val=software 00:10:27.570 21:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.570 21:06:50 -- accel/accel.sh@23 -- # accel_module=software 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # IFS=: 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # read -r var val 00:10:27.570 21:06:50 -- accel/accel.sh@21 -- # val=32 00:10:27.570 21:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # IFS=: 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # read -r var val 00:10:27.570 21:06:50 -- accel/accel.sh@21 -- # val=32 00:10:27.570 21:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # IFS=: 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # read -r var val 00:10:27.570 21:06:50 -- accel/accel.sh@21 -- # val=1 00:10:27.570 21:06:50 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # IFS=: 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # read -r var val 00:10:27.570 21:06:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:27.570 21:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # IFS=: 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # read -r var val 00:10:27.570 21:06:50 -- accel/accel.sh@21 -- # val=Yes 00:10:27.570 21:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # IFS=: 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # read -r var val 00:10:27.570 21:06:50 -- accel/accel.sh@21 -- # val= 00:10:27.570 21:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # IFS=: 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # read -r var val 00:10:27.570 21:06:50 -- accel/accel.sh@21 -- # val= 00:10:27.570 21:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # IFS=: 00:10:27.570 21:06:50 -- accel/accel.sh@20 -- # read -r var val 00:10:28.953 21:06:51 -- accel/accel.sh@21 -- # val= 00:10:28.953 21:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.953 21:06:51 -- accel/accel.sh@20 -- # IFS=: 00:10:28.953 21:06:51 -- accel/accel.sh@20 -- # read -r var val 00:10:28.953 21:06:51 -- accel/accel.sh@21 -- # val= 00:10:28.953 21:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.953 21:06:51 -- accel/accel.sh@20 -- # IFS=: 00:10:28.953 21:06:51 -- accel/accel.sh@20 -- # read -r var val 00:10:28.953 21:06:51 -- accel/accel.sh@21 -- # val= 00:10:28.953 21:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.953 21:06:51 -- accel/accel.sh@20 -- # IFS=: 00:10:28.953 21:06:51 -- accel/accel.sh@20 -- # read -r var val 00:10:28.953 21:06:51 -- accel/accel.sh@21 -- # val= 00:10:28.953 21:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.953 21:06:51 -- accel/accel.sh@20 -- # IFS=: 00:10:28.953 21:06:51 -- accel/accel.sh@20 -- # read -r var val 00:10:28.953 21:06:51 -- accel/accel.sh@21 -- # val= 00:10:28.953 21:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.953 21:06:51 -- accel/accel.sh@20 -- # IFS=: 00:10:28.953 21:06:51 -- accel/accel.sh@20 -- # read -r var val 00:10:28.953 21:06:51 -- accel/accel.sh@21 -- # val= 00:10:28.953 21:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.953 21:06:51 -- accel/accel.sh@20 -- # IFS=: 00:10:28.953 21:06:51 -- accel/accel.sh@20 -- # read -r var val 00:10:28.953 21:06:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:28.953 21:06:51 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:28.953 21:06:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:28.953 00:10:28.953 real 0m3.232s 00:10:28.953 user 0m2.773s 00:10:28.953 sys 0m0.323s 00:10:28.953 21:06:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.953 21:06:51 -- common/autotest_common.sh@10 -- # set +x 00:10:28.953 ************************************ 00:10:28.953 END TEST accel_xor 00:10:28.953 ************************************ 00:10:28.953 21:06:51 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:28.953 21:06:51 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:28.953 21:06:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:28.953 21:06:51 -- common/autotest_common.sh@10 -- # set +x 00:10:28.953 ************************************ 00:10:28.953 START TEST accel_xor 00:10:28.953 ************************************ 00:10:28.953 
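The first accel_xor pass above runs with the default two source buffers ("Source buffers: 2" in its configuration block). Equivalent direct run before the three-source variant below:

    # XOR two 4096-byte sources into one destination, verifying the result.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y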
21:06:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:10:28.953 21:06:51 -- accel/accel.sh@16 -- # local accel_opc 00:10:28.953 21:06:51 -- accel/accel.sh@17 -- # local accel_module 00:10:28.953 21:06:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:10:28.953 21:06:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:28.953 21:06:51 -- accel/accel.sh@12 -- # build_accel_config 00:10:28.953 21:06:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:28.953 21:06:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:28.953 21:06:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:28.953 21:06:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:28.953 21:06:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:28.953 21:06:51 -- accel/accel.sh@41 -- # local IFS=, 00:10:28.953 21:06:51 -- accel/accel.sh@42 -- # jq -r . 00:10:28.953 [2024-06-07 21:06:51.483286] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:28.953 [2024-06-07 21:06:51.483721] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120546 ] 00:10:29.211 [2024-06-07 21:06:51.653997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.212 [2024-06-07 21:06:51.750947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.589 21:06:53 -- accel/accel.sh@18 -- # out=' 00:10:30.589 SPDK Configuration: 00:10:30.589 Core mask: 0x1 00:10:30.589 00:10:30.589 Accel Perf Configuration: 00:10:30.589 Workload Type: xor 00:10:30.589 Source buffers: 3 00:10:30.589 Transfer size: 4096 bytes 00:10:30.589 Vector count 1 00:10:30.589 Module: software 00:10:30.589 Queue depth: 32 00:10:30.589 Allocate depth: 32 00:10:30.589 # threads/core: 1 00:10:30.589 Run time: 1 seconds 00:10:30.589 Verify: Yes 00:10:30.589 00:10:30.589 Running for 1 seconds... 00:10:30.589 00:10:30.589 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:30.589 ------------------------------------------------------------------------------------ 00:10:30.589 0,0 175488/s 685 MiB/s 0 0 00:10:30.589 ==================================================================================== 00:10:30.589 Total 175488/s 685 MiB/s 0 0' 00:10:30.589 21:06:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:30.589 21:06:53 -- accel/accel.sh@20 -- # IFS=: 00:10:30.589 21:06:53 -- accel/accel.sh@20 -- # read -r var val 00:10:30.589 21:06:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:30.589 21:06:53 -- accel/accel.sh@12 -- # build_accel_config 00:10:30.589 21:06:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:30.589 21:06:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:30.589 21:06:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:30.589 21:06:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:30.589 21:06:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:30.589 21:06:53 -- accel/accel.sh@41 -- # local IFS=, 00:10:30.589 21:06:53 -- accel/accel.sh@42 -- # jq -r . 00:10:30.589 [2024-06-07 21:06:53.072190] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:10:30.589 [2024-06-07 21:06:53.072575] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120578 ] 00:10:30.589 [2024-06-07 21:06:53.233732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.861 [2024-06-07 21:06:53.334911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.861 21:06:53 -- accel/accel.sh@21 -- # val= 00:10:30.861 21:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # IFS=: 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # read -r var val 00:10:30.861 21:06:53 -- accel/accel.sh@21 -- # val= 00:10:30.861 21:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # IFS=: 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # read -r var val 00:10:30.861 21:06:53 -- accel/accel.sh@21 -- # val=0x1 00:10:30.861 21:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # IFS=: 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # read -r var val 00:10:30.861 21:06:53 -- accel/accel.sh@21 -- # val= 00:10:30.861 21:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # IFS=: 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # read -r var val 00:10:30.861 21:06:53 -- accel/accel.sh@21 -- # val= 00:10:30.861 21:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # IFS=: 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # read -r var val 00:10:30.861 21:06:53 -- accel/accel.sh@21 -- # val=xor 00:10:30.861 21:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.861 21:06:53 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # IFS=: 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # read -r var val 00:10:30.861 21:06:53 -- accel/accel.sh@21 -- # val=3 00:10:30.861 21:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # IFS=: 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # read -r var val 00:10:30.861 21:06:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:30.861 21:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # IFS=: 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # read -r var val 00:10:30.861 21:06:53 -- accel/accel.sh@21 -- # val= 00:10:30.861 21:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # IFS=: 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # read -r var val 00:10:30.861 21:06:53 -- accel/accel.sh@21 -- # val=software 00:10:30.861 21:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.861 21:06:53 -- accel/accel.sh@23 -- # accel_module=software 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # IFS=: 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # read -r var val 00:10:30.861 21:06:53 -- accel/accel.sh@21 -- # val=32 00:10:30.861 21:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # IFS=: 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # read -r var val 00:10:30.861 21:06:53 -- accel/accel.sh@21 -- # val=32 00:10:30.861 21:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # IFS=: 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # read -r var val 00:10:30.861 21:06:53 -- accel/accel.sh@21 -- # val=1 00:10:30.861 21:06:53 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # IFS=: 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # read -r var val 00:10:30.861 21:06:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:30.861 21:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # IFS=: 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # read -r var val 00:10:30.861 21:06:53 -- accel/accel.sh@21 -- # val=Yes 00:10:30.861 21:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # IFS=: 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # read -r var val 00:10:30.861 21:06:53 -- accel/accel.sh@21 -- # val= 00:10:30.861 21:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # IFS=: 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # read -r var val 00:10:30.861 21:06:53 -- accel/accel.sh@21 -- # val= 00:10:30.861 21:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # IFS=: 00:10:30.861 21:06:53 -- accel/accel.sh@20 -- # read -r var val 00:10:32.237 21:06:54 -- accel/accel.sh@21 -- # val= 00:10:32.237 21:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.237 21:06:54 -- accel/accel.sh@20 -- # IFS=: 00:10:32.237 21:06:54 -- accel/accel.sh@20 -- # read -r var val 00:10:32.237 21:06:54 -- accel/accel.sh@21 -- # val= 00:10:32.237 21:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.237 21:06:54 -- accel/accel.sh@20 -- # IFS=: 00:10:32.237 21:06:54 -- accel/accel.sh@20 -- # read -r var val 00:10:32.237 21:06:54 -- accel/accel.sh@21 -- # val= 00:10:32.237 21:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.237 21:06:54 -- accel/accel.sh@20 -- # IFS=: 00:10:32.237 21:06:54 -- accel/accel.sh@20 -- # read -r var val 00:10:32.237 21:06:54 -- accel/accel.sh@21 -- # val= 00:10:32.237 21:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.237 21:06:54 -- accel/accel.sh@20 -- # IFS=: 00:10:32.237 21:06:54 -- accel/accel.sh@20 -- # read -r var val 00:10:32.237 21:06:54 -- accel/accel.sh@21 -- # val= 00:10:32.237 21:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.237 21:06:54 -- accel/accel.sh@20 -- # IFS=: 00:10:32.237 21:06:54 -- accel/accel.sh@20 -- # read -r var val 00:10:32.237 21:06:54 -- accel/accel.sh@21 -- # val= 00:10:32.237 21:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.237 21:06:54 -- accel/accel.sh@20 -- # IFS=: 00:10:32.237 21:06:54 -- accel/accel.sh@20 -- # read -r var val 00:10:32.237 ************************************ 00:10:32.237 END TEST accel_xor 00:10:32.237 ************************************ 00:10:32.237 21:06:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:32.237 21:06:54 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:32.237 21:06:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:32.237 00:10:32.237 real 0m3.181s 00:10:32.237 user 0m2.725s 00:10:32.237 sys 0m0.320s 00:10:32.237 21:06:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.237 21:06:54 -- common/autotest_common.sh@10 -- # set +x 00:10:32.237 21:06:54 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:32.237 21:06:54 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:32.237 21:06:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:32.237 21:06:54 -- common/autotest_common.sh@10 -- # set +x 00:10:32.237 ************************************ 00:10:32.237 START TEST accel_dif_verify 00:10:32.237 ************************************ 
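The three-source xor run that ends above differs only by -x 3, which raises the source-buffer count and plausibly accounts for the lower throughput (685 vs 726 MiB/s: one extra read stream per operation):

    # Same xor workload with a third source buffer.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3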
00:10:32.237 21:06:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:10:32.237 21:06:54 -- accel/accel.sh@16 -- # local accel_opc 00:10:32.237 21:06:54 -- accel/accel.sh@17 -- # local accel_module 00:10:32.237 21:06:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:10:32.237 21:06:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:32.237 21:06:54 -- accel/accel.sh@12 -- # build_accel_config 00:10:32.237 21:06:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:32.237 21:06:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.237 21:06:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.237 21:06:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:32.237 21:06:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:32.237 21:06:54 -- accel/accel.sh@41 -- # local IFS=, 00:10:32.237 21:06:54 -- accel/accel.sh@42 -- # jq -r . 00:10:32.237 [2024-06-07 21:06:54.719378] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:32.237 [2024-06-07 21:06:54.719866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120633 ] 00:10:32.237 [2024-06-07 21:06:54.882598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.496 [2024-06-07 21:06:55.009274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.872 21:06:56 -- accel/accel.sh@18 -- # out=' 00:10:33.872 SPDK Configuration: 00:10:33.872 Core mask: 0x1 00:10:33.872 00:10:33.872 Accel Perf Configuration: 00:10:33.872 Workload Type: dif_verify 00:10:33.872 Vector size: 4096 bytes 00:10:33.872 Transfer size: 4096 bytes 00:10:33.872 Block size: 512 bytes 00:10:33.872 Metadata size: 8 bytes 00:10:33.872 Vector count 1 00:10:33.872 Module: software 00:10:33.872 Queue depth: 32 00:10:33.872 Allocate depth: 32 00:10:33.872 # threads/core: 1 00:10:33.872 Run time: 1 seconds 00:10:33.872 Verify: No 00:10:33.872 00:10:33.872 Running for 1 seconds... 00:10:33.872 00:10:33.872 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:33.872 ------------------------------------------------------------------------------------ 00:10:33.872 0,0 85344/s 333 MiB/s 0 0 00:10:33.873 ==================================================================================== 00:10:33.873 Total 85344/s 333 MiB/s 0 0' 00:10:33.873 21:06:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:33.873 21:06:56 -- accel/accel.sh@20 -- # IFS=: 00:10:33.873 21:06:56 -- accel/accel.sh@20 -- # read -r var val 00:10:33.873 21:06:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:33.873 21:06:56 -- accel/accel.sh@12 -- # build_accel_config 00:10:33.873 21:06:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:33.873 21:06:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:33.873 21:06:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:33.873 21:06:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:33.873 21:06:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:33.873 21:06:56 -- accel/accel.sh@41 -- # local IFS=, 00:10:33.873 21:06:56 -- accel/accel.sh@42 -- # jq -r . 00:10:33.873 [2024-06-07 21:06:56.353296] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:10:33.873 [2024-06-07 21:06:56.354024] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120667 ] 00:10:33.873 [2024-06-07 21:06:56.529330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.131 [2024-06-07 21:06:56.652070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.131 21:06:56 -- accel/accel.sh@21 -- # val= 00:10:34.131 21:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.131 21:06:56 -- accel/accel.sh@20 -- # IFS=: 00:10:34.131 21:06:56 -- accel/accel.sh@20 -- # read -r var val 00:10:34.131 21:06:56 -- accel/accel.sh@21 -- # val= 00:10:34.131 21:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.131 21:06:56 -- accel/accel.sh@20 -- # IFS=: 00:10:34.131 21:06:56 -- accel/accel.sh@20 -- # read -r var val 00:10:34.131 21:06:56 -- accel/accel.sh@21 -- # val=0x1 00:10:34.131 21:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.131 21:06:56 -- accel/accel.sh@20 -- # IFS=: 00:10:34.131 21:06:56 -- accel/accel.sh@20 -- # read -r var val 00:10:34.131 21:06:56 -- accel/accel.sh@21 -- # val= 00:10:34.131 21:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.131 21:06:56 -- accel/accel.sh@20 -- # IFS=: 00:10:34.131 21:06:56 -- accel/accel.sh@20 -- # read -r var val 00:10:34.131 21:06:56 -- accel/accel.sh@21 -- # val= 00:10:34.131 21:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.131 21:06:56 -- accel/accel.sh@20 -- # IFS=: 00:10:34.131 21:06:56 -- accel/accel.sh@20 -- # read -r var val 00:10:34.131 21:06:56 -- accel/accel.sh@21 -- # val=dif_verify 00:10:34.131 21:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.131 21:06:56 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:10:34.131 21:06:56 -- accel/accel.sh@20 -- # IFS=: 00:10:34.131 21:06:56 -- accel/accel.sh@20 -- # read -r var val 00:10:34.131 21:06:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:34.131 21:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.131 21:06:56 -- accel/accel.sh@20 -- # IFS=: 00:10:34.131 21:06:56 -- accel/accel.sh@20 -- # read -r var val 00:10:34.131 21:06:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:34.131 21:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.131 21:06:56 -- accel/accel.sh@20 -- # IFS=: 00:10:34.131 21:06:56 -- accel/accel.sh@20 -- # read -r var val 00:10:34.131 21:06:56 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:34.131 21:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # IFS=: 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # read -r var val 00:10:34.132 21:06:56 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:34.132 21:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # IFS=: 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # read -r var val 00:10:34.132 21:06:56 -- accel/accel.sh@21 -- # val= 00:10:34.132 21:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # IFS=: 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # read -r var val 00:10:34.132 21:06:56 -- accel/accel.sh@21 -- # val=software 00:10:34.132 21:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.132 21:06:56 -- accel/accel.sh@23 -- # accel_module=software 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # IFS=: 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # read -r var val 00:10:34.132 21:06:56 -- 
accel/accel.sh@21 -- # val=32 00:10:34.132 21:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # IFS=: 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # read -r var val 00:10:34.132 21:06:56 -- accel/accel.sh@21 -- # val=32 00:10:34.132 21:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # IFS=: 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # read -r var val 00:10:34.132 21:06:56 -- accel/accel.sh@21 -- # val=1 00:10:34.132 21:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # IFS=: 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # read -r var val 00:10:34.132 21:06:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:34.132 21:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # IFS=: 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # read -r var val 00:10:34.132 21:06:56 -- accel/accel.sh@21 -- # val=No 00:10:34.132 21:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # IFS=: 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # read -r var val 00:10:34.132 21:06:56 -- accel/accel.sh@21 -- # val= 00:10:34.132 21:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # IFS=: 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # read -r var val 00:10:34.132 21:06:56 -- accel/accel.sh@21 -- # val= 00:10:34.132 21:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # IFS=: 00:10:34.132 21:06:56 -- accel/accel.sh@20 -- # read -r var val 00:10:35.510 21:06:57 -- accel/accel.sh@21 -- # val= 00:10:35.510 21:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.510 21:06:57 -- accel/accel.sh@20 -- # IFS=: 00:10:35.510 21:06:57 -- accel/accel.sh@20 -- # read -r var val 00:10:35.510 21:06:57 -- accel/accel.sh@21 -- # val= 00:10:35.510 21:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.510 21:06:57 -- accel/accel.sh@20 -- # IFS=: 00:10:35.510 21:06:57 -- accel/accel.sh@20 -- # read -r var val 00:10:35.510 21:06:57 -- accel/accel.sh@21 -- # val= 00:10:35.510 21:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.510 21:06:57 -- accel/accel.sh@20 -- # IFS=: 00:10:35.510 21:06:57 -- accel/accel.sh@20 -- # read -r var val 00:10:35.510 21:06:57 -- accel/accel.sh@21 -- # val= 00:10:35.510 21:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.510 21:06:57 -- accel/accel.sh@20 -- # IFS=: 00:10:35.510 21:06:57 -- accel/accel.sh@20 -- # read -r var val 00:10:35.510 21:06:57 -- accel/accel.sh@21 -- # val= 00:10:35.510 21:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.510 21:06:57 -- accel/accel.sh@20 -- # IFS=: 00:10:35.510 21:06:57 -- accel/accel.sh@20 -- # read -r var val 00:10:35.510 21:06:57 -- accel/accel.sh@21 -- # val= 00:10:35.510 21:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.510 21:06:57 -- accel/accel.sh@20 -- # IFS=: 00:10:35.510 21:06:57 -- accel/accel.sh@20 -- # read -r var val 00:10:35.510 21:06:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:35.510 21:06:57 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:10:35.510 21:06:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:35.510 00:10:35.510 real 0m3.272s 00:10:35.510 user 0m2.755s 00:10:35.510 sys 0m0.377s 00:10:35.510 21:06:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.510 21:06:57 -- common/autotest_common.sh@10 -- # set +x 00:10:35.511 ************************************ 00:10:35.511 END 
TEST accel_dif_verify 00:10:35.511 ************************************ 00:10:35.511 21:06:57 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:35.511 21:06:57 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:35.511 21:06:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:35.511 21:06:57 -- common/autotest_common.sh@10 -- # set +x 00:10:35.511 ************************************ 00:10:35.511 START TEST accel_dif_generate 00:10:35.511 ************************************ 00:10:35.511 21:06:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:10:35.511 21:06:58 -- accel/accel.sh@16 -- # local accel_opc 00:10:35.511 21:06:58 -- accel/accel.sh@17 -- # local accel_module 00:10:35.511 21:06:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:10:35.511 21:06:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:35.511 21:06:58 -- accel/accel.sh@12 -- # build_accel_config 00:10:35.511 21:06:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:35.511 21:06:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:35.511 21:06:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:35.511 21:06:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:35.511 21:06:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:35.511 21:06:58 -- accel/accel.sh@41 -- # local IFS=, 00:10:35.511 21:06:58 -- accel/accel.sh@42 -- # jq -r . 00:10:35.511 [2024-06-07 21:06:58.046687] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:35.511 [2024-06-07 21:06:58.047198] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120702 ] 00:10:35.770 [2024-06-07 21:06:58.221015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.770 [2024-06-07 21:06:58.333366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.146 21:06:59 -- accel/accel.sh@18 -- # out=' 00:10:37.146 SPDK Configuration: 00:10:37.146 Core mask: 0x1 00:10:37.146 00:10:37.146 Accel Perf Configuration: 00:10:37.146 Workload Type: dif_generate 00:10:37.146 Vector size: 4096 bytes 00:10:37.146 Transfer size: 4096 bytes 00:10:37.146 Block size: 512 bytes 00:10:37.146 Metadata size: 8 bytes 00:10:37.146 Vector count 1 00:10:37.146 Module: software 00:10:37.146 Queue depth: 32 00:10:37.146 Allocate depth: 32 00:10:37.146 # threads/core: 1 00:10:37.146 Run time: 1 seconds 00:10:37.146 Verify: No 00:10:37.146 00:10:37.146 Running for 1 seconds... 
00:10:37.146 00:10:37.146 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:37.146 ------------------------------------------------------------------------------------ 00:10:37.146 0,0 103552/s 404 MiB/s 0 0 00:10:37.146 ==================================================================================== 00:10:37.146 Total 103552/s 404 MiB/s 0 0' 00:10:37.146 21:06:59 -- accel/accel.sh@20 -- # IFS=: 00:10:37.146 21:06:59 -- accel/accel.sh@20 -- # read -r var val 00:10:37.146 21:06:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:37.146 21:06:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:37.146 21:06:59 -- accel/accel.sh@12 -- # build_accel_config 00:10:37.146 21:06:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:37.146 21:06:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:37.146 21:06:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:37.146 21:06:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:37.146 21:06:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:37.146 21:06:59 -- accel/accel.sh@41 -- # local IFS=, 00:10:37.146 21:06:59 -- accel/accel.sh@42 -- # jq -r . 00:10:37.146 [2024-06-07 21:06:59.646686] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:37.146 [2024-06-07 21:06:59.647304] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120736 ] 00:10:37.146 [2024-06-07 21:06:59.818331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.405 [2024-06-07 21:06:59.922590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.405 21:06:59 -- accel/accel.sh@21 -- # val= 00:10:37.405 21:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.405 21:06:59 -- accel/accel.sh@20 -- # IFS=: 00:10:37.405 21:06:59 -- accel/accel.sh@20 -- # read -r var val 00:10:37.405 21:06:59 -- accel/accel.sh@21 -- # val= 00:10:37.405 21:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.405 21:06:59 -- accel/accel.sh@20 -- # IFS=: 00:10:37.405 21:06:59 -- accel/accel.sh@20 -- # read -r var val 00:10:37.405 21:06:59 -- accel/accel.sh@21 -- # val=0x1 00:10:37.405 21:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.405 21:06:59 -- accel/accel.sh@20 -- # IFS=: 00:10:37.405 21:06:59 -- accel/accel.sh@20 -- # read -r var val 00:10:37.405 21:06:59 -- accel/accel.sh@21 -- # val= 00:10:37.405 21:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.405 21:06:59 -- accel/accel.sh@20 -- # IFS=: 00:10:37.405 21:06:59 -- accel/accel.sh@20 -- # read -r var val 00:10:37.405 21:06:59 -- accel/accel.sh@21 -- # val= 00:10:37.405 21:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.405 21:06:59 -- accel/accel.sh@20 -- # IFS=: 00:10:37.405 21:06:59 -- accel/accel.sh@20 -- # read -r var val 00:10:37.405 21:06:59 -- accel/accel.sh@21 -- # val=dif_generate 00:10:37.405 21:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.405 21:06:59 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:10:37.405 21:06:59 -- accel/accel.sh@20 -- # IFS=: 00:10:37.405 21:06:59 -- accel/accel.sh@20 -- # read -r var val 00:10:37.405 21:06:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:37.405 21:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.405 21:06:59 -- accel/accel.sh@20 -- # IFS=: 00:10:37.405 21:06:59 -- accel/accel.sh@20 -- # read -r var val
00:10:37.405 21:06:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:37.405 21:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.405 21:06:59 -- accel/accel.sh@20 -- # IFS=: 00:10:37.405 21:06:59 -- accel/accel.sh@20 -- # read -r var val 00:10:37.405 21:06:59 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:37.405 21:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.405 21:06:59 -- accel/accel.sh@20 -- # IFS=: 00:10:37.405 21:06:59 -- accel/accel.sh@20 -- # read -r var val 00:10:37.405 21:06:59 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:37.405 21:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.405 21:06:59 -- accel/accel.sh@20 -- # IFS=: 00:10:37.405 21:06:59 -- accel/accel.sh@20 -- # read -r var val 00:10:37.405 21:06:59 -- accel/accel.sh@21 -- # val= 00:10:37.405 21:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.406 21:06:59 -- accel/accel.sh@20 -- # IFS=: 00:10:37.406 21:06:59 -- accel/accel.sh@20 -- # read -r var val 00:10:37.406 21:06:59 -- accel/accel.sh@21 -- # val=software 00:10:37.406 21:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.406 21:06:59 -- accel/accel.sh@23 -- # accel_module=software 00:10:37.406 21:06:59 -- accel/accel.sh@20 -- # IFS=: 00:10:37.406 21:06:59 -- accel/accel.sh@20 -- # read -r var val 00:10:37.406 21:06:59 -- accel/accel.sh@21 -- # val=32 00:10:37.406 21:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.406 21:06:59 -- accel/accel.sh@20 -- # IFS=: 00:10:37.406 21:06:59 -- accel/accel.sh@20 -- # read -r var val 00:10:37.406 21:06:59 -- accel/accel.sh@21 -- # val=32 00:10:37.406 21:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.406 21:06:59 -- accel/accel.sh@20 -- # IFS=: 00:10:37.406 21:06:59 -- accel/accel.sh@20 -- # read -r var val 00:10:37.406 21:06:59 -- accel/accel.sh@21 -- # val=1 00:10:37.406 21:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.406 21:06:59 -- accel/accel.sh@20 -- # IFS=: 00:10:37.406 21:06:59 -- accel/accel.sh@20 -- # read -r var val 00:10:37.406 21:06:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:37.406 21:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.406 21:06:59 -- accel/accel.sh@20 -- # IFS=: 00:10:37.406 21:06:59 -- accel/accel.sh@20 -- # read -r var val 00:10:37.406 21:06:59 -- accel/accel.sh@21 -- # val=No 00:10:37.406 21:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.406 21:06:59 -- accel/accel.sh@20 -- # IFS=: 00:10:37.406 21:06:59 -- accel/accel.sh@20 -- # read -r var val 00:10:37.406 21:06:59 -- accel/accel.sh@21 -- # val= 00:10:37.406 21:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.406 21:06:59 -- accel/accel.sh@20 -- # IFS=: 00:10:37.406 21:06:59 -- accel/accel.sh@20 -- # read -r var val 00:10:37.406 21:06:59 -- accel/accel.sh@21 -- # val= 00:10:37.406 21:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.406 21:06:59 -- accel/accel.sh@20 -- # IFS=: 00:10:37.406 21:06:59 -- accel/accel.sh@20 -- # read -r var val 00:10:38.823 21:07:01 -- accel/accel.sh@21 -- # val= 00:10:38.823 21:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.823 21:07:01 -- accel/accel.sh@20 -- # IFS=: 00:10:38.823 21:07:01 -- accel/accel.sh@20 -- # read -r var val 00:10:38.823 21:07:01 -- accel/accel.sh@21 -- # val= 00:10:38.823 21:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.823 21:07:01 -- accel/accel.sh@20 -- # IFS=: 00:10:38.823 21:07:01 -- accel/accel.sh@20 -- # read -r var val 00:10:38.823 21:07:01 -- accel/accel.sh@21 -- # val= 00:10:38.823 21:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.823 21:07:01 -- 
accel/accel.sh@20 -- # IFS=: 00:10:38.823 21:07:01 -- accel/accel.sh@20 -- # read -r var val 00:10:38.823 21:07:01 -- accel/accel.sh@21 -- # val= 00:10:38.823 21:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.823 21:07:01 -- accel/accel.sh@20 -- # IFS=: 00:10:38.823 21:07:01 -- accel/accel.sh@20 -- # read -r var val 00:10:38.823 21:07:01 -- accel/accel.sh@21 -- # val= 00:10:38.823 21:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.823 21:07:01 -- accel/accel.sh@20 -- # IFS=: 00:10:38.823 21:07:01 -- accel/accel.sh@20 -- # read -r var val 00:10:38.823 21:07:01 -- accel/accel.sh@21 -- # val= 00:10:38.823 21:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.823 21:07:01 -- accel/accel.sh@20 -- # IFS=: 00:10:38.823 21:07:01 -- accel/accel.sh@20 -- # read -r var val 00:10:38.823 ************************************ 00:10:38.823 END TEST accel_dif_generate 00:10:38.823 ************************************ 00:10:38.823 21:07:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:38.823 21:07:01 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:10:38.823 21:07:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:38.823 00:10:38.823 real 0m3.181s 00:10:38.823 user 0m2.700s 00:10:38.823 sys 0m0.342s 00:10:38.823 21:07:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.823 21:07:01 -- common/autotest_common.sh@10 -- # set +x 00:10:38.823 21:07:01 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:38.823 21:07:01 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:38.823 21:07:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:38.823 21:07:01 -- common/autotest_common.sh@10 -- # set +x 00:10:38.823 ************************************ 00:10:38.823 START TEST accel_dif_generate_copy 00:10:38.823 ************************************ 00:10:38.823 21:07:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:10:38.823 21:07:01 -- accel/accel.sh@16 -- # local accel_opc 00:10:38.823 21:07:01 -- accel/accel.sh@17 -- # local accel_module 00:10:38.823 21:07:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:10:38.823 21:07:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:38.823 21:07:01 -- accel/accel.sh@12 -- # build_accel_config 00:10:38.823 21:07:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:38.823 21:07:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:38.823 21:07:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:38.823 21:07:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:38.823 21:07:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:38.823 21:07:01 -- accel/accel.sh@41 -- # local IFS=, 00:10:38.823 21:07:01 -- accel/accel.sh@42 -- # jq -r . 00:10:38.823 [2024-06-07 21:07:01.279602] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:10:38.823 [2024-06-07 21:07:01.279953] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120778 ] 00:10:38.823 [2024-06-07 21:07:01.440316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.081 [2024-06-07 21:07:01.560182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.456 21:07:02 -- accel/accel.sh@18 -- # out=' 00:10:40.456 SPDK Configuration: 00:10:40.456 Core mask: 0x1 00:10:40.456 00:10:40.456 Accel Perf Configuration: 00:10:40.456 Workload Type: dif_generate_copy 00:10:40.456 Vector size: 4096 bytes 00:10:40.456 Transfer size: 4096 bytes 00:10:40.456 Vector count 1 00:10:40.456 Module: software 00:10:40.456 Queue depth: 32 00:10:40.456 Allocate depth: 32 00:10:40.456 # threads/core: 1 00:10:40.456 Run time: 1 seconds 00:10:40.456 Verify: No 00:10:40.456 00:10:40.456 Running for 1 seconds... 00:10:40.456 00:10:40.456 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:40.456 ------------------------------------------------------------------------------------ 00:10:40.456 0,0 75456/s 294 MiB/s 0 0 00:10:40.456 ==================================================================================== 00:10:40.456 Total 75456/s 294 MiB/s 0 0' 00:10:40.456 21:07:02 -- accel/accel.sh@20 -- # IFS=: 00:10:40.456 21:07:02 -- accel/accel.sh@20 -- # read -r var val 00:10:40.456 21:07:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:40.456 21:07:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:40.456 21:07:02 -- accel/accel.sh@12 -- # build_accel_config 00:10:40.456 21:07:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:40.456 21:07:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:40.456 21:07:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:40.456 21:07:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:40.456 21:07:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:40.456 21:07:02 -- accel/accel.sh@41 -- # local IFS=, 00:10:40.456 21:07:02 -- accel/accel.sh@42 -- # jq -r . 00:10:40.456 [2024-06-07 21:07:02.873250] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:10:40.456 [2024-06-07 21:07:02.873934] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120805 ] 00:10:40.456 [2024-06-07 21:07:03.053003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.714 [2024-06-07 21:07:03.189730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.714 21:07:03 -- accel/accel.sh@21 -- # val= 00:10:40.714 21:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.714 21:07:03 -- accel/accel.sh@21 -- # val= 00:10:40.714 21:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.714 21:07:03 -- accel/accel.sh@21 -- # val=0x1 00:10:40.714 21:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.714 21:07:03 -- accel/accel.sh@21 -- # val= 00:10:40.714 21:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.714 21:07:03 -- accel/accel.sh@21 -- # val= 00:10:40.714 21:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.714 21:07:03 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:10:40.714 21:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.714 21:07:03 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.714 21:07:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:40.714 21:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.714 21:07:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:40.714 21:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.714 21:07:03 -- accel/accel.sh@21 -- # val= 00:10:40.714 21:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.714 21:07:03 -- accel/accel.sh@21 -- # val=software 00:10:40.714 21:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.714 21:07:03 -- accel/accel.sh@23 -- # accel_module=software 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.714 21:07:03 -- accel/accel.sh@21 -- # val=32 00:10:40.714 21:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.714 21:07:03 -- accel/accel.sh@21 -- # val=32 00:10:40.714 21:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.714 21:07:03 -- accel/accel.sh@21 
-- # val=1 00:10:40.714 21:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.714 21:07:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:40.714 21:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.714 21:07:03 -- accel/accel.sh@21 -- # val=No 00:10:40.714 21:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.714 21:07:03 -- accel/accel.sh@21 -- # val= 00:10:40.714 21:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # read -r var val 00:10:40.714 21:07:03 -- accel/accel.sh@21 -- # val= 00:10:40.714 21:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # IFS=: 00:10:40.714 21:07:03 -- accel/accel.sh@20 -- # read -r var val 00:10:42.089 21:07:04 -- accel/accel.sh@21 -- # val= 00:10:42.089 21:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.089 21:07:04 -- accel/accel.sh@20 -- # IFS=: 00:10:42.089 21:07:04 -- accel/accel.sh@20 -- # read -r var val 00:10:42.089 21:07:04 -- accel/accel.sh@21 -- # val= 00:10:42.089 21:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.089 21:07:04 -- accel/accel.sh@20 -- # IFS=: 00:10:42.089 21:07:04 -- accel/accel.sh@20 -- # read -r var val 00:10:42.089 21:07:04 -- accel/accel.sh@21 -- # val= 00:10:42.089 21:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.089 21:07:04 -- accel/accel.sh@20 -- # IFS=: 00:10:42.089 21:07:04 -- accel/accel.sh@20 -- # read -r var val 00:10:42.089 21:07:04 -- accel/accel.sh@21 -- # val= 00:10:42.089 21:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.089 21:07:04 -- accel/accel.sh@20 -- # IFS=: 00:10:42.089 21:07:04 -- accel/accel.sh@20 -- # read -r var val 00:10:42.089 21:07:04 -- accel/accel.sh@21 -- # val= 00:10:42.089 21:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.089 21:07:04 -- accel/accel.sh@20 -- # IFS=: 00:10:42.089 21:07:04 -- accel/accel.sh@20 -- # read -r var val 00:10:42.089 21:07:04 -- accel/accel.sh@21 -- # val= 00:10:42.089 21:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.089 21:07:04 -- accel/accel.sh@20 -- # IFS=: 00:10:42.089 21:07:04 -- accel/accel.sh@20 -- # read -r var val 00:10:42.089 ************************************ 00:10:42.089 END TEST accel_dif_generate_copy 00:10:42.089 ************************************ 00:10:42.089 21:07:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:42.089 21:07:04 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:10:42.089 21:07:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:42.089 00:10:42.089 real 0m3.267s 00:10:42.089 user 0m2.768s 00:10:42.089 sys 0m0.355s 00:10:42.089 21:07:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.089 21:07:04 -- common/autotest_common.sh@10 -- # set +x 00:10:42.089 21:07:04 -- accel/accel.sh@107 -- # [[ y == y ]] 00:10:42.089 21:07:04 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:42.089 21:07:04 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:42.089 21:07:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:42.089 21:07:04 -- 
common/autotest_common.sh@10 -- # set +x 00:10:42.089 ************************************ 00:10:42.089 START TEST accel_comp 00:10:42.089 ************************************ 00:10:42.089 21:07:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:42.089 21:07:04 -- accel/accel.sh@16 -- # local accel_opc 00:10:42.089 21:07:04 -- accel/accel.sh@17 -- # local accel_module 00:10:42.089 21:07:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:42.089 21:07:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:42.089 21:07:04 -- accel/accel.sh@12 -- # build_accel_config 00:10:42.089 21:07:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:42.089 21:07:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:42.089 21:07:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:42.089 21:07:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:42.089 21:07:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:42.089 21:07:04 -- accel/accel.sh@41 -- # local IFS=, 00:10:42.089 21:07:04 -- accel/accel.sh@42 -- # jq -r . 00:10:42.089 [2024-06-07 21:07:04.603024] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:42.089 [2024-06-07 21:07:04.603332] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120866 ] 00:10:42.348 [2024-06-07 21:07:04.769403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.348 [2024-06-07 21:07:04.892535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.723 21:07:06 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:43.723 00:10:43.723 SPDK Configuration: 00:10:43.723 Core mask: 0x1 00:10:43.723 00:10:43.723 Accel Perf Configuration: 00:10:43.723 Workload Type: compress 00:10:43.723 Transfer size: 4096 bytes 00:10:43.723 Vector count 1 00:10:43.723 Module: software 00:10:43.723 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:43.723 Queue depth: 32 00:10:43.723 Allocate depth: 32 00:10:43.723 # threads/core: 1 00:10:43.723 Run time: 1 seconds 00:10:43.723 Verify: No 00:10:43.723 00:10:43.723 Running for 1 seconds... 
00:10:43.723 00:10:43.724 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:43.724 ------------------------------------------------------------------------------------ 00:10:43.724 0,0 43456/s 169 MiB/s 0 0 00:10:43.724 ==================================================================================== 00:10:43.724 Total 43456/s 169 MiB/s 0 0' 00:10:43.724 21:07:06 -- accel/accel.sh@20 -- # IFS=: 00:10:43.724 21:07:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:43.724 21:07:06 -- accel/accel.sh@20 -- # read -r var val 00:10:43.724 21:07:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:43.724 21:07:06 -- accel/accel.sh@12 -- # build_accel_config 00:10:43.724 21:07:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:43.724 21:07:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:43.724 21:07:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:43.724 21:07:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:43.724 21:07:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:43.724 21:07:06 -- accel/accel.sh@41 -- # local IFS=, 00:10:43.724 21:07:06 -- accel/accel.sh@42 -- # jq -r . 00:10:43.724 [2024-06-07 21:07:06.231967] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:43.724 [2024-06-07 21:07:06.232321] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120893 ] 00:10:43.982 [2024-06-07 21:07:06.408193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.982 [2024-06-07 21:07:06.529639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.982 21:07:06 -- accel/accel.sh@21 -- # val= 00:10:43.982 21:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # IFS=: 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # read -r var val 00:10:43.982 21:07:06 -- accel/accel.sh@21 -- # val= 00:10:43.982 21:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # IFS=: 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # read -r var val 00:10:43.982 21:07:06 -- accel/accel.sh@21 -- # val= 00:10:43.982 21:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # IFS=: 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # read -r var val 00:10:43.982 21:07:06 -- accel/accel.sh@21 -- # val=0x1 00:10:43.982 21:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # IFS=: 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # read -r var val 00:10:43.982 21:07:06 -- accel/accel.sh@21 -- # val= 00:10:43.982 21:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # IFS=: 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # read -r var val 00:10:43.982 21:07:06 -- accel/accel.sh@21 -- # val= 00:10:43.982 21:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # IFS=: 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # read -r var val 00:10:43.982 21:07:06 -- accel/accel.sh@21 -- # val=compress 00:10:43.982 21:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.982 21:07:06 -- accel/accel.sh@24 -- # accel_opc=compress 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # IFS=:
00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # read -r var val 00:10:43.982 21:07:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:43.982 21:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # IFS=: 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # read -r var val 00:10:43.982 21:07:06 -- accel/accel.sh@21 -- # val= 00:10:43.982 21:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # IFS=: 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # read -r var val 00:10:43.982 21:07:06 -- accel/accel.sh@21 -- # val=software 00:10:43.982 21:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.982 21:07:06 -- accel/accel.sh@23 -- # accel_module=software 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # IFS=: 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # read -r var val 00:10:43.982 21:07:06 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:43.982 21:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # IFS=: 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # read -r var val 00:10:43.982 21:07:06 -- accel/accel.sh@21 -- # val=32 00:10:43.982 21:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # IFS=: 00:10:43.982 21:07:06 -- accel/accel.sh@20 -- # read -r var val 00:10:43.982 21:07:06 -- accel/accel.sh@21 -- # val=32 00:10:43.983 21:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.983 21:07:06 -- accel/accel.sh@20 -- # IFS=: 00:10:43.983 21:07:06 -- accel/accel.sh@20 -- # read -r var val 00:10:43.983 21:07:06 -- accel/accel.sh@21 -- # val=1 00:10:43.983 21:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.983 21:07:06 -- accel/accel.sh@20 -- # IFS=: 00:10:43.983 21:07:06 -- accel/accel.sh@20 -- # read -r var val 00:10:43.983 21:07:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:43.983 21:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.983 21:07:06 -- accel/accel.sh@20 -- # IFS=: 00:10:43.983 21:07:06 -- accel/accel.sh@20 -- # read -r var val 00:10:43.983 21:07:06 -- accel/accel.sh@21 -- # val=No 00:10:43.983 21:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.983 21:07:06 -- accel/accel.sh@20 -- # IFS=: 00:10:43.983 21:07:06 -- accel/accel.sh@20 -- # read -r var val 00:10:43.983 21:07:06 -- accel/accel.sh@21 -- # val= 00:10:43.983 21:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.983 21:07:06 -- accel/accel.sh@20 -- # IFS=: 00:10:43.983 21:07:06 -- accel/accel.sh@20 -- # read -r var val 00:10:43.983 21:07:06 -- accel/accel.sh@21 -- # val= 00:10:43.983 21:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.983 21:07:06 -- accel/accel.sh@20 -- # IFS=: 00:10:43.983 21:07:06 -- accel/accel.sh@20 -- # read -r var val 00:10:45.359 21:07:07 -- accel/accel.sh@21 -- # val= 00:10:45.359 21:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.359 21:07:07 -- accel/accel.sh@20 -- # IFS=: 00:10:45.359 21:07:07 -- accel/accel.sh@20 -- # read -r var val 00:10:45.359 21:07:07 -- accel/accel.sh@21 -- # val= 00:10:45.359 21:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.359 21:07:07 -- accel/accel.sh@20 -- # IFS=: 00:10:45.359 21:07:07 -- accel/accel.sh@20 -- # read -r var val 00:10:45.359 21:07:07 -- accel/accel.sh@21 -- # val= 00:10:45.359 21:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.359 21:07:07 -- accel/accel.sh@20 -- # IFS=: 00:10:45.359 21:07:07 -- accel/accel.sh@20 -- # read -r var val 00:10:45.359 21:07:07 -- accel/accel.sh@21 -- # val= 
00:10:45.359 21:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.359 21:07:07 -- accel/accel.sh@20 -- # IFS=: 00:10:45.359 21:07:07 -- accel/accel.sh@20 -- # read -r var val 00:10:45.359 21:07:07 -- accel/accel.sh@21 -- # val= 00:10:45.359 21:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.359 21:07:07 -- accel/accel.sh@20 -- # IFS=: 00:10:45.359 21:07:07 -- accel/accel.sh@20 -- # read -r var val 00:10:45.359 21:07:07 -- accel/accel.sh@21 -- # val= 00:10:45.359 21:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.359 21:07:07 -- accel/accel.sh@20 -- # IFS=: 00:10:45.359 21:07:07 -- accel/accel.sh@20 -- # read -r var val 00:10:45.359 21:07:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:45.359 21:07:07 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:10:45.359 21:07:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:45.359 00:10:45.359 real 0m3.256s 00:10:45.359 user 0m2.761s 00:10:45.359 sys 0m0.370s 00:10:45.359 21:07:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:45.359 21:07:07 -- common/autotest_common.sh@10 -- # set +x 00:10:45.359 ************************************ 00:10:45.359 END TEST accel_comp 00:10:45.359 ************************************ 00:10:45.359 21:07:07 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:45.359 21:07:07 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:45.359 21:07:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:45.359 21:07:07 -- common/autotest_common.sh@10 -- # set +x 00:10:45.359 ************************************ 00:10:45.359 START TEST accel_decomp 00:10:45.359 ************************************ 00:10:45.359 21:07:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:45.359 21:07:07 -- accel/accel.sh@16 -- # local accel_opc 00:10:45.359 21:07:07 -- accel/accel.sh@17 -- # local accel_module 00:10:45.359 21:07:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:45.359 21:07:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:45.359 21:07:07 -- accel/accel.sh@12 -- # build_accel_config 00:10:45.359 21:07:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:45.359 21:07:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:45.359 21:07:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:45.359 21:07:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:45.359 21:07:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:45.359 21:07:07 -- accel/accel.sh@41 -- # local IFS=, 00:10:45.359 21:07:07 -- accel/accel.sh@42 -- # jq -r . 00:10:45.359 [2024-06-07 21:07:07.915738] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:45.359 [2024-06-07 21:07:07.916211] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120935 ] 00:10:45.618 [2024-06-07 21:07:08.088722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.618 [2024-06-07 21:07:08.219152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.995 21:07:09 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:10:46.995 00:10:46.995 SPDK Configuration: 00:10:46.995 Core mask: 0x1 00:10:46.995 00:10:46.995 Accel Perf Configuration: 00:10:46.995 Workload Type: decompress 00:10:46.995 Transfer size: 4096 bytes 00:10:46.995 Vector count 1 00:10:46.995 Module: software 00:10:46.995 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:46.995 Queue depth: 32 00:10:46.995 Allocate depth: 32 00:10:46.995 # threads/core: 1 00:10:46.995 Run time: 1 seconds 00:10:46.995 Verify: Yes 00:10:46.995 00:10:46.995 Running for 1 seconds... 00:10:46.995 00:10:46.995 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:46.995 ------------------------------------------------------------------------------------ 00:10:46.995 0,0 50368/s 196 MiB/s 0 0 00:10:46.995 ==================================================================================== 00:10:46.995 Total 50368/s 196 MiB/s 0 0' 00:10:46.995 21:07:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:46.995 21:07:09 -- accel/accel.sh@20 -- # IFS=: 00:10:46.995 21:07:09 -- accel/accel.sh@20 -- # read -r var val 00:10:46.995 21:07:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:46.995 21:07:09 -- accel/accel.sh@12 -- # build_accel_config 00:10:46.995 21:07:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:46.995 21:07:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:46.995 21:07:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:46.995 21:07:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:46.995 21:07:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:46.995 21:07:09 -- accel/accel.sh@41 -- # local IFS=, 00:10:46.995 21:07:09 -- accel/accel.sh@42 -- # jq -r . 00:10:46.995 [2024-06-07 21:07:09.561414] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:10:46.995 [2024-06-07 21:07:09.561738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120969 ] 00:10:47.255 [2024-06-07 21:07:09.747336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.255 [2024-06-07 21:07:09.837092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.255 21:07:09 -- accel/accel.sh@21 -- # val= 00:10:47.255 21:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # IFS=: 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # read -r var val 00:10:47.255 21:07:09 -- accel/accel.sh@21 -- # val= 00:10:47.255 21:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # IFS=: 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # read -r var val 00:10:47.255 21:07:09 -- accel/accel.sh@21 -- # val= 00:10:47.255 21:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # IFS=: 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # read -r var val 00:10:47.255 21:07:09 -- accel/accel.sh@21 -- # val=0x1 00:10:47.255 21:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # IFS=: 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # read -r var val 00:10:47.255 21:07:09 -- accel/accel.sh@21 -- # val= 00:10:47.255 21:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # IFS=: 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # read -r var val 00:10:47.255 21:07:09 -- accel/accel.sh@21 -- # val= 00:10:47.255 21:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # IFS=: 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # read -r var val 00:10:47.255 21:07:09 -- accel/accel.sh@21 -- # val=decompress 00:10:47.255 21:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.255 21:07:09 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # IFS=: 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # read -r var val 00:10:47.255 21:07:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:47.255 21:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # IFS=: 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # read -r var val 00:10:47.255 21:07:09 -- accel/accel.sh@21 -- # val= 00:10:47.255 21:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # IFS=: 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # read -r var val 00:10:47.255 21:07:09 -- accel/accel.sh@21 -- # val=software 00:10:47.255 21:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.255 21:07:09 -- accel/accel.sh@23 -- # accel_module=software 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # IFS=: 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # read -r var val 00:10:47.255 21:07:09 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:47.255 21:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # IFS=: 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # read -r var val 00:10:47.255 21:07:09 -- accel/accel.sh@21 -- # val=32 00:10:47.255 21:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # IFS=: 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # read -r var val 00:10:47.255 21:07:09 -- 
accel/accel.sh@21 -- # val=32 00:10:47.255 21:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # IFS=: 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # read -r var val 00:10:47.255 21:07:09 -- accel/accel.sh@21 -- # val=1 00:10:47.255 21:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # IFS=: 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # read -r var val 00:10:47.255 21:07:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:47.255 21:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # IFS=: 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # read -r var val 00:10:47.255 21:07:09 -- accel/accel.sh@21 -- # val=Yes 00:10:47.255 21:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # IFS=: 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # read -r var val 00:10:47.255 21:07:09 -- accel/accel.sh@21 -- # val= 00:10:47.255 21:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # IFS=: 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # read -r var val 00:10:47.255 21:07:09 -- accel/accel.sh@21 -- # val= 00:10:47.255 21:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # IFS=: 00:10:47.255 21:07:09 -- accel/accel.sh@20 -- # read -r var val 00:10:48.632 21:07:11 -- accel/accel.sh@21 -- # val= 00:10:48.632 21:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.632 21:07:11 -- accel/accel.sh@20 -- # IFS=: 00:10:48.632 21:07:11 -- accel/accel.sh@20 -- # read -r var val 00:10:48.632 21:07:11 -- accel/accel.sh@21 -- # val= 00:10:48.632 21:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.632 21:07:11 -- accel/accel.sh@20 -- # IFS=: 00:10:48.632 21:07:11 -- accel/accel.sh@20 -- # read -r var val 00:10:48.632 21:07:11 -- accel/accel.sh@21 -- # val= 00:10:48.632 21:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.632 21:07:11 -- accel/accel.sh@20 -- # IFS=: 00:10:48.632 21:07:11 -- accel/accel.sh@20 -- # read -r var val 00:10:48.632 21:07:11 -- accel/accel.sh@21 -- # val= 00:10:48.632 21:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.632 21:07:11 -- accel/accel.sh@20 -- # IFS=: 00:10:48.632 21:07:11 -- accel/accel.sh@20 -- # read -r var val 00:10:48.632 21:07:11 -- accel/accel.sh@21 -- # val= 00:10:48.632 21:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.632 21:07:11 -- accel/accel.sh@20 -- # IFS=: 00:10:48.632 21:07:11 -- accel/accel.sh@20 -- # read -r var val 00:10:48.632 21:07:11 -- accel/accel.sh@21 -- # val= 00:10:48.632 21:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.632 21:07:11 -- accel/accel.sh@20 -- # IFS=: 00:10:48.632 21:07:11 -- accel/accel.sh@20 -- # read -r var val 00:10:48.632 21:07:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:48.632 21:07:11 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:48.632 21:07:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:48.632 00:10:48.632 real 0m3.263s 00:10:48.632 user 0m2.783s 00:10:48.632 sys 0m0.358s 00:10:48.632 21:07:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.632 21:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:48.632 ************************************ 00:10:48.632 END TEST accel_decomp 00:10:48.632 ************************************ 00:10:48.632 21:07:11 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
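The run_test line above exercises the full-buffer decompress variant; all of its flags appear verbatim in the traces that follow: -l feeds the pre-compressed bib test file, -y turns result verification on, and -o 0 appears to let the transfer size follow the input data rather than the 4096-byte default — the 111250-byte transfer size reported below is consistent with that reading. A standalone sketch, under the same path assumption as the earlier note:
  # hypothetical standalone rerun of the decompress-full case (flags copied from the log)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0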
00:10:48.632 21:07:11 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:10:48.632 21:07:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:48.632 21:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:48.632 ************************************ 00:10:48.632 START TEST accel_decmop_full 00:10:48.632 ************************************ 00:10:48.632 21:07:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:48.632 21:07:11 -- accel/accel.sh@16 -- # local accel_opc 00:10:48.632 21:07:11 -- accel/accel.sh@17 -- # local accel_module 00:10:48.632 21:07:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:48.632 21:07:11 -- accel/accel.sh@12 -- # build_accel_config 00:10:48.632 21:07:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:48.632 21:07:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:48.632 21:07:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:48.632 21:07:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:48.632 21:07:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:48.632 21:07:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:48.632 21:07:11 -- accel/accel.sh@41 -- # local IFS=, 00:10:48.632 21:07:11 -- accel/accel.sh@42 -- # jq -r . 00:10:48.632 [2024-06-07 21:07:11.239674] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:48.632 [2024-06-07 21:07:11.239983] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121004 ] 00:10:48.891 [2024-06-07 21:07:11.412341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.891 [2024-06-07 21:07:11.513076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.269 21:07:12 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:50.269 00:10:50.269 SPDK Configuration: 00:10:50.269 Core mask: 0x1 00:10:50.269 00:10:50.269 Accel Perf Configuration: 00:10:50.269 Workload Type: decompress 00:10:50.269 Transfer size: 111250 bytes 00:10:50.269 Vector count 1 00:10:50.269 Module: software 00:10:50.269 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:50.269 Queue depth: 32 00:10:50.269 Allocate depth: 32 00:10:50.269 # threads/core: 1 00:10:50.269 Run time: 1 seconds 00:10:50.269 Verify: Yes 00:10:50.269 00:10:50.269 Running for 1 seconds... 
00:10:50.269 00:10:50.269 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:50.269 ------------------------------------------------------------------------------------ 00:10:50.269 0,0 4096/s 434 MiB/s 0 0 00:10:50.269 ==================================================================================== 00:10:50.269 Total 4096/s 434 MiB/s 0 0' 00:10:50.269 21:07:12 -- accel/accel.sh@20 -- # IFS=: 00:10:50.269 21:07:12 -- accel/accel.sh@20 -- # read -r var val 00:10:50.269 21:07:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:50.269 21:07:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:50.269 21:07:12 -- accel/accel.sh@12 -- # build_accel_config 00:10:50.269 21:07:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:50.269 21:07:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:50.269 21:07:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:50.269 21:07:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:50.269 21:07:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:50.269 21:07:12 -- accel/accel.sh@41 -- # local IFS=, 00:10:50.269 21:07:12 -- accel/accel.sh@42 -- # jq -r . 00:10:50.269 [2024-06-07 21:07:12.861967] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:50.269 [2024-06-07 21:07:12.862200] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121038 ] 00:10:50.534 [2024-06-07 21:07:13.035329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.534 [2024-06-07 21:07:13.138602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.796 21:07:13 -- accel/accel.sh@21 -- # val= 00:10:50.796 21:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # IFS=: 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # read -r var val 00:10:50.796 21:07:13 -- accel/accel.sh@21 -- # val= 00:10:50.796 21:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # IFS=: 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # read -r var val 00:10:50.796 21:07:13 -- accel/accel.sh@21 -- # val= 00:10:50.796 21:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # IFS=: 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # read -r var val 00:10:50.796 21:07:13 -- accel/accel.sh@21 -- # val=0x1 00:10:50.796 21:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # IFS=: 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # read -r var val 00:10:50.796 21:07:13 -- accel/accel.sh@21 -- # val= 00:10:50.796 21:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # IFS=: 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # read -r var val 00:10:50.796 21:07:13 -- accel/accel.sh@21 -- # val= 00:10:50.796 21:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # IFS=: 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # read -r var val 00:10:50.796 21:07:13 -- accel/accel.sh@21 -- # val=decompress 00:10:50.796 21:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.796 21:07:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:50.796 21:07:13 --
accel/accel.sh@20 -- # IFS=: 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # read -r var val 00:10:50.796 21:07:13 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:50.796 21:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # IFS=: 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # read -r var val 00:10:50.796 21:07:13 -- accel/accel.sh@21 -- # val= 00:10:50.796 21:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # IFS=: 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # read -r var val 00:10:50.796 21:07:13 -- accel/accel.sh@21 -- # val=software 00:10:50.796 21:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.796 21:07:13 -- accel/accel.sh@23 -- # accel_module=software 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # IFS=: 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # read -r var val 00:10:50.796 21:07:13 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:50.796 21:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # IFS=: 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # read -r var val 00:10:50.796 21:07:13 -- accel/accel.sh@21 -- # val=32 00:10:50.796 21:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # IFS=: 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # read -r var val 00:10:50.796 21:07:13 -- accel/accel.sh@21 -- # val=32 00:10:50.796 21:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # IFS=: 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # read -r var val 00:10:50.796 21:07:13 -- accel/accel.sh@21 -- # val=1 00:10:50.796 21:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # IFS=: 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # read -r var val 00:10:50.796 21:07:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:50.796 21:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # IFS=: 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # read -r var val 00:10:50.796 21:07:13 -- accel/accel.sh@21 -- # val=Yes 00:10:50.796 21:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # IFS=: 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # read -r var val 00:10:50.796 21:07:13 -- accel/accel.sh@21 -- # val= 00:10:50.796 21:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # IFS=: 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # read -r var val 00:10:50.796 21:07:13 -- accel/accel.sh@21 -- # val= 00:10:50.796 21:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # IFS=: 00:10:50.796 21:07:13 -- accel/accel.sh@20 -- # read -r var val 00:10:52.201 21:07:14 -- accel/accel.sh@21 -- # val= 00:10:52.201 21:07:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.201 21:07:14 -- accel/accel.sh@20 -- # IFS=: 00:10:52.201 21:07:14 -- accel/accel.sh@20 -- # read -r var val 00:10:52.201 21:07:14 -- accel/accel.sh@21 -- # val= 00:10:52.201 21:07:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.201 21:07:14 -- accel/accel.sh@20 -- # IFS=: 00:10:52.201 21:07:14 -- accel/accel.sh@20 -- # read -r var val 00:10:52.201 21:07:14 -- accel/accel.sh@21 -- # val= 00:10:52.201 21:07:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.201 21:07:14 -- accel/accel.sh@20 -- # IFS=: 00:10:52.201 21:07:14 -- accel/accel.sh@20 -- # read -r var val 00:10:52.201 21:07:14 -- 
accel/accel.sh@21 -- # val= 00:10:52.201 21:07:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.201 21:07:14 -- accel/accel.sh@20 -- # IFS=: 00:10:52.201 21:07:14 -- accel/accel.sh@20 -- # read -r var val 00:10:52.201 21:07:14 -- accel/accel.sh@21 -- # val= 00:10:52.201 21:07:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.201 21:07:14 -- accel/accel.sh@20 -- # IFS=: 00:10:52.201 21:07:14 -- accel/accel.sh@20 -- # read -r var val 00:10:52.201 21:07:14 -- accel/accel.sh@21 -- # val= 00:10:52.201 21:07:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.201 21:07:14 -- accel/accel.sh@20 -- # IFS=: 00:10:52.201 21:07:14 -- accel/accel.sh@20 -- # read -r var val 00:10:52.201 21:07:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:52.201 21:07:14 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:52.201 21:07:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:52.201 00:10:52.201 real 0m3.251s 00:10:52.201 user 0m2.759s 00:10:52.201 sys 0m0.364s 00:10:52.201 21:07:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.201 21:07:14 -- common/autotest_common.sh@10 -- # set +x 00:10:52.201 ************************************ 00:10:52.201 END TEST accel_decmop_full 00:10:52.201 ************************************ 00:10:52.201 21:07:14 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:52.201 21:07:14 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:10:52.201 21:07:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:52.201 21:07:14 -- common/autotest_common.sh@10 -- # set +x 00:10:52.201 ************************************ 00:10:52.201 START TEST accel_decomp_mcore 00:10:52.201 ************************************ 00:10:52.201 21:07:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:52.201 21:07:14 -- accel/accel.sh@16 -- # local accel_opc 00:10:52.201 21:07:14 -- accel/accel.sh@17 -- # local accel_module 00:10:52.201 21:07:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:52.201 21:07:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:52.201 21:07:14 -- accel/accel.sh@12 -- # build_accel_config 00:10:52.201 21:07:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:52.201 21:07:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:52.201 21:07:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:52.201 21:07:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:52.201 21:07:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:52.201 21:07:14 -- accel/accel.sh@41 -- # local IFS=, 00:10:52.201 21:07:14 -- accel/accel.sh@42 -- # jq -r . 00:10:52.201 [2024-06-07 21:07:14.538997] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
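The Total row of the accel_decmop_full run above is internally consistent: 4096 transfers/s at the 111250-byte transfer size gives 4096 * 111250 ≈ 455.7 MB/s ≈ 434 MiB/s, matching the reported figure. A minimal sketch of replaying that workload by hand, based on the accel_perf invocation captured in the xtrace above (paths are the ones from this VM; the harness-supplied -c /dev/fd/62 JSON config is dropped here on the assumption that accel_perf then falls back to its defaults):

# 1-second single-core software decompress of the bib test file, full-size
# transfers (-o 0 corresponds to the 111250-byte transfer size reported
# above), with output verification enabled (-y).
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
  -t 1 -w decompress \
  -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
  -y -o 0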
00:10:52.201 [2024-06-07 21:07:14.539271] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121102 ] 00:10:52.201 [2024-06-07 21:07:14.727944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.201 [2024-06-07 21:07:14.839733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.201 [2024-06-07 21:07:14.839854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.201 [2024-06-07 21:07:14.840586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.201 [2024-06-07 21:07:14.840541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.577 21:07:16 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:53.577 00:10:53.577 SPDK Configuration: 00:10:53.577 Core mask: 0xf 00:10:53.577 00:10:53.577 Accel Perf Configuration: 00:10:53.577 Workload Type: decompress 00:10:53.577 Transfer size: 4096 bytes 00:10:53.577 Vector count 1 00:10:53.577 Module: software 00:10:53.577 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:53.577 Queue depth: 32 00:10:53.577 Allocate depth: 32 00:10:53.577 # threads/core: 1 00:10:53.577 Run time: 1 seconds 00:10:53.577 Verify: Yes 00:10:53.577 00:10:53.577 Running for 1 seconds... 00:10:53.577 00:10:53.577 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:53.577 ------------------------------------------------------------------------------------ 00:10:53.577 0,0 46784/s 86 MiB/s 0 0 00:10:53.577 3,0 49504/s 91 MiB/s 0 0 00:10:53.577 2,0 50560/s 93 MiB/s 0 0 00:10:53.577 1,0 49984/s 92 MiB/s 0 0 00:10:53.577 ==================================================================================== 00:10:53.577 Total 196832/s 768 MiB/s 0 0' 00:10:53.577 21:07:16 -- accel/accel.sh@20 -- # IFS=: 00:10:53.577 21:07:16 -- accel/accel.sh@20 -- # read -r var val 00:10:53.577 21:07:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:53.577 21:07:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:53.577 21:07:16 -- accel/accel.sh@12 -- # build_accel_config 00:10:53.577 21:07:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:53.577 21:07:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:53.577 21:07:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:53.577 21:07:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:53.577 21:07:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:53.577 21:07:16 -- accel/accel.sh@41 -- # local IFS=, 00:10:53.577 21:07:16 -- accel/accel.sh@42 -- # jq -r . 00:10:53.577 [2024-06-07 21:07:16.170223] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
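With the 0xf core mask the app brings up one reactor per set bit — the four "Reactor started on core N" notices above — and each core polls its own decompress stream. The Total row again matches transfers times transfer size: 196832/s * 4096 bytes ≈ 806 MB/s ≈ 768 MiB/s. A sketch of the four-core variant, under the same assumptions as the single-core sketch earlier:

# Four reactors (-m 0xf, cores 0-3), default 4096-byte transfers (no -o),
# verification enabled.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
  -t 1 -w decompress \
  -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
  -y -m 0xf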
00:10:53.577 [2024-06-07 21:07:16.170430] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121132 ] 00:10:53.835 [2024-06-07 21:07:16.352431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.835 [2024-06-07 21:07:16.463400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.835 [2024-06-07 21:07:16.463539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.835 [2024-06-07 21:07:16.464329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.835 [2024-06-07 21:07:16.464339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.094 21:07:16 -- accel/accel.sh@21 -- # val= 00:10:54.094 21:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # IFS=: 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # read -r var val 00:10:54.094 21:07:16 -- accel/accel.sh@21 -- # val= 00:10:54.094 21:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # IFS=: 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # read -r var val 00:10:54.094 21:07:16 -- accel/accel.sh@21 -- # val= 00:10:54.094 21:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # IFS=: 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # read -r var val 00:10:54.094 21:07:16 -- accel/accel.sh@21 -- # val=0xf 00:10:54.094 21:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # IFS=: 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # read -r var val 00:10:54.094 21:07:16 -- accel/accel.sh@21 -- # val= 00:10:54.094 21:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # IFS=: 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # read -r var val 00:10:54.094 21:07:16 -- accel/accel.sh@21 -- # val= 00:10:54.094 21:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # IFS=: 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # read -r var val 00:10:54.094 21:07:16 -- accel/accel.sh@21 -- # val=decompress 00:10:54.094 21:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.094 21:07:16 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # IFS=: 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # read -r var val 00:10:54.094 21:07:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:54.094 21:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # IFS=: 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # read -r var val 00:10:54.094 21:07:16 -- accel/accel.sh@21 -- # val= 00:10:54.094 21:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # IFS=: 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # read -r var val 00:10:54.094 21:07:16 -- accel/accel.sh@21 -- # val=software 00:10:54.094 21:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.094 21:07:16 -- accel/accel.sh@23 -- # accel_module=software 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # IFS=: 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # read -r var val 00:10:54.094 21:07:16 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:54.094 21:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # IFS=: 
00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # read -r var val 00:10:54.094 21:07:16 -- accel/accel.sh@21 -- # val=32 00:10:54.094 21:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # IFS=: 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # read -r var val 00:10:54.094 21:07:16 -- accel/accel.sh@21 -- # val=32 00:10:54.094 21:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # IFS=: 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # read -r var val 00:10:54.094 21:07:16 -- accel/accel.sh@21 -- # val=1 00:10:54.094 21:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # IFS=: 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # read -r var val 00:10:54.094 21:07:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:54.094 21:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # IFS=: 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # read -r var val 00:10:54.094 21:07:16 -- accel/accel.sh@21 -- # val=Yes 00:10:54.094 21:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # IFS=: 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # read -r var val 00:10:54.094 21:07:16 -- accel/accel.sh@21 -- # val= 00:10:54.094 21:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # IFS=: 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # read -r var val 00:10:54.094 21:07:16 -- accel/accel.sh@21 -- # val= 00:10:54.094 21:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # IFS=: 00:10:54.094 21:07:16 -- accel/accel.sh@20 -- # read -r var val 00:10:55.468 21:07:17 -- accel/accel.sh@21 -- # val= 00:10:55.468 21:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.468 21:07:17 -- accel/accel.sh@20 -- # IFS=: 00:10:55.468 21:07:17 -- accel/accel.sh@20 -- # read -r var val 00:10:55.468 21:07:17 -- accel/accel.sh@21 -- # val= 00:10:55.468 21:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.468 21:07:17 -- accel/accel.sh@20 -- # IFS=: 00:10:55.468 21:07:17 -- accel/accel.sh@20 -- # read -r var val 00:10:55.468 21:07:17 -- accel/accel.sh@21 -- # val= 00:10:55.468 21:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.468 21:07:17 -- accel/accel.sh@20 -- # IFS=: 00:10:55.468 21:07:17 -- accel/accel.sh@20 -- # read -r var val 00:10:55.468 21:07:17 -- accel/accel.sh@21 -- # val= 00:10:55.468 21:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.468 21:07:17 -- accel/accel.sh@20 -- # IFS=: 00:10:55.468 21:07:17 -- accel/accel.sh@20 -- # read -r var val 00:10:55.468 21:07:17 -- accel/accel.sh@21 -- # val= 00:10:55.468 21:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.468 21:07:17 -- accel/accel.sh@20 -- # IFS=: 00:10:55.468 21:07:17 -- accel/accel.sh@20 -- # read -r var val 00:10:55.468 21:07:17 -- accel/accel.sh@21 -- # val= 00:10:55.468 21:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.468 21:07:17 -- accel/accel.sh@20 -- # IFS=: 00:10:55.468 21:07:17 -- accel/accel.sh@20 -- # read -r var val 00:10:55.468 21:07:17 -- accel/accel.sh@21 -- # val= 00:10:55.468 21:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.468 21:07:17 -- accel/accel.sh@20 -- # IFS=: 00:10:55.468 21:07:17 -- accel/accel.sh@20 -- # read -r var val 00:10:55.468 21:07:17 -- accel/accel.sh@21 -- # val= 00:10:55.468 21:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.468 21:07:17 -- accel/accel.sh@20 -- # IFS=: 00:10:55.468 21:07:17 -- 
accel/accel.sh@20 -- # read -r var val 00:10:55.468 21:07:17 -- accel/accel.sh@21 -- # val= 00:10:55.468 21:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.468 21:07:17 -- accel/accel.sh@20 -- # IFS=: 00:10:55.468 21:07:17 -- accel/accel.sh@20 -- # read -r var val 00:10:55.468 21:07:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:55.468 21:07:17 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:55.468 21:07:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:55.468 00:10:55.468 real 0m3.298s 00:10:55.468 user 0m9.876s 00:10:55.468 sys 0m0.394s 00:10:55.468 21:07:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.468 21:07:17 -- common/autotest_common.sh@10 -- # set +x 00:10:55.468 ************************************ 00:10:55.468 END TEST accel_decomp_mcore 00:10:55.468 ************************************ 00:10:55.468 21:07:17 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:55.468 21:07:17 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:55.468 21:07:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:55.468 21:07:17 -- common/autotest_common.sh@10 -- # set +x 00:10:55.468 ************************************ 00:10:55.469 START TEST accel_decomp_full_mcore 00:10:55.469 ************************************ 00:10:55.469 21:07:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:55.469 21:07:17 -- accel/accel.sh@16 -- # local accel_opc 00:10:55.469 21:07:17 -- accel/accel.sh@17 -- # local accel_module 00:10:55.469 21:07:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:55.469 21:07:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:55.469 21:07:17 -- accel/accel.sh@12 -- # build_accel_config 00:10:55.469 21:07:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:55.469 21:07:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:55.469 21:07:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:55.469 21:07:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:55.469 21:07:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:55.469 21:07:17 -- accel/accel.sh@41 -- # local IFS=, 00:10:55.469 21:07:17 -- accel/accel.sh@42 -- # jq -r . 00:10:55.469 [2024-06-07 21:07:17.894908] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:55.469 [2024-06-07 21:07:17.895154] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121180 ] 00:10:55.469 [2024-06-07 21:07:18.087314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.726 [2024-06-07 21:07:18.192594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.726 [2024-06-07 21:07:18.192765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.726 [2024-06-07 21:07:18.193620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.726 [2024-06-07 21:07:18.193657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.100 21:07:19 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:10:57.100 00:10:57.100 SPDK Configuration: 00:10:57.100 Core mask: 0xf 00:10:57.100 00:10:57.100 Accel Perf Configuration: 00:10:57.100 Workload Type: decompress 00:10:57.100 Transfer size: 111250 bytes 00:10:57.100 Vector count 1 00:10:57.100 Module: software 00:10:57.100 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:57.100 Queue depth: 32 00:10:57.100 Allocate depth: 32 00:10:57.100 # threads/core: 1 00:10:57.100 Run time: 1 seconds 00:10:57.100 Verify: Yes 00:10:57.100 00:10:57.100 Running for 1 seconds... 00:10:57.100 00:10:57.100 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:57.100 ------------------------------------------------------------------------------------ 00:10:57.100 0,0 4224/s 174 MiB/s 0 0 00:10:57.100 3,0 4128/s 170 MiB/s 0 0 00:10:57.100 2,0 4224/s 174 MiB/s 0 0 00:10:57.100 1,0 4256/s 175 MiB/s 0 0 00:10:57.100 ==================================================================================== 00:10:57.100 Total 16832/s 1785 MiB/s 0 0' 00:10:57.100 21:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:57.100 21:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:57.100 21:07:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:57.100 21:07:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:57.100 21:07:19 -- accel/accel.sh@12 -- # build_accel_config 00:10:57.100 21:07:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:57.100 21:07:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:57.100 21:07:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:57.100 21:07:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:57.100 21:07:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:57.100 21:07:19 -- accel/accel.sh@41 -- # local IFS=, 00:10:57.100 21:07:19 -- accel/accel.sh@42 -- # jq -r . 00:10:57.100 [2024-06-07 21:07:19.555615] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
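Combining the two knobs — the 0xf mask and -o 0 full-size transfers — scales as expected: 16832 transfers/s * 111250 bytes ≈ 1.87 GB/s ≈ 1785 MiB/s across the four reactors, roughly four times the 434 MiB/s the same workload managed on a single core. Sketch, same assumptions as above:

# Full-size (111250-byte) software decompress spread over four cores.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
  -t 1 -w decompress \
  -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
  -y -o 0 -m 0xf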
00:10:57.100 [2024-06-07 21:07:19.555941] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121214 ] 00:10:57.100 [2024-06-07 21:07:19.749156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.359 [2024-06-07 21:07:19.880333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.359 [2024-06-07 21:07:19.880467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.359 [2024-06-07 21:07:19.881333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.359 [2024-06-07 21:07:19.881375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.359 21:07:19 -- accel/accel.sh@21 -- # val= 00:10:57.359 21:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:57.359 21:07:19 -- accel/accel.sh@21 -- # val= 00:10:57.359 21:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:57.359 21:07:19 -- accel/accel.sh@21 -- # val= 00:10:57.359 21:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:57.359 21:07:19 -- accel/accel.sh@21 -- # val=0xf 00:10:57.359 21:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:57.359 21:07:19 -- accel/accel.sh@21 -- # val= 00:10:57.359 21:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:57.359 21:07:19 -- accel/accel.sh@21 -- # val= 00:10:57.359 21:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:57.359 21:07:19 -- accel/accel.sh@21 -- # val=decompress 00:10:57.359 21:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.359 21:07:19 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:57.359 21:07:19 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:57.359 21:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:57.359 21:07:19 -- accel/accel.sh@21 -- # val= 00:10:57.359 21:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:57.359 21:07:19 -- accel/accel.sh@21 -- # val=software 00:10:57.359 21:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.359 21:07:19 -- accel/accel.sh@23 -- # accel_module=software 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:57.359 21:07:19 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:57.359 21:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # IFS=: 
00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:57.359 21:07:19 -- accel/accel.sh@21 -- # val=32 00:10:57.359 21:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:57.359 21:07:19 -- accel/accel.sh@21 -- # val=32 00:10:57.359 21:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:57.359 21:07:19 -- accel/accel.sh@21 -- # val=1 00:10:57.359 21:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:57.359 21:07:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:57.359 21:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:57.359 21:07:19 -- accel/accel.sh@21 -- # val=Yes 00:10:57.359 21:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:57.359 21:07:19 -- accel/accel.sh@21 -- # val= 00:10:57.359 21:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:57.359 21:07:19 -- accel/accel.sh@21 -- # val= 00:10:57.359 21:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:57.359 21:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:58.733 21:07:21 -- accel/accel.sh@21 -- # val= 00:10:58.733 21:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.733 21:07:21 -- accel/accel.sh@20 -- # IFS=: 00:10:58.733 21:07:21 -- accel/accel.sh@20 -- # read -r var val 00:10:58.733 21:07:21 -- accel/accel.sh@21 -- # val= 00:10:58.733 21:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.733 21:07:21 -- accel/accel.sh@20 -- # IFS=: 00:10:58.734 21:07:21 -- accel/accel.sh@20 -- # read -r var val 00:10:58.734 21:07:21 -- accel/accel.sh@21 -- # val= 00:10:58.734 21:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.734 21:07:21 -- accel/accel.sh@20 -- # IFS=: 00:10:58.734 21:07:21 -- accel/accel.sh@20 -- # read -r var val 00:10:58.734 21:07:21 -- accel/accel.sh@21 -- # val= 00:10:58.734 21:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.734 21:07:21 -- accel/accel.sh@20 -- # IFS=: 00:10:58.734 21:07:21 -- accel/accel.sh@20 -- # read -r var val 00:10:58.734 21:07:21 -- accel/accel.sh@21 -- # val= 00:10:58.734 21:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.734 21:07:21 -- accel/accel.sh@20 -- # IFS=: 00:10:58.734 21:07:21 -- accel/accel.sh@20 -- # read -r var val 00:10:58.734 21:07:21 -- accel/accel.sh@21 -- # val= 00:10:58.734 21:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.734 21:07:21 -- accel/accel.sh@20 -- # IFS=: 00:10:58.734 21:07:21 -- accel/accel.sh@20 -- # read -r var val 00:10:58.734 21:07:21 -- accel/accel.sh@21 -- # val= 00:10:58.734 21:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.734 21:07:21 -- accel/accel.sh@20 -- # IFS=: 00:10:58.734 21:07:21 -- accel/accel.sh@20 -- # read -r var val 00:10:58.734 21:07:21 -- accel/accel.sh@21 -- # val= 00:10:58.734 21:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.734 21:07:21 -- accel/accel.sh@20 -- # IFS=: 00:10:58.734 21:07:21 -- 
accel/accel.sh@20 -- # read -r var val 00:10:58.734 21:07:21 -- accel/accel.sh@21 -- # val= 00:10:58.734 21:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.734 21:07:21 -- accel/accel.sh@20 -- # IFS=: 00:10:58.734 21:07:21 -- accel/accel.sh@20 -- # read -r var val 00:10:58.734 21:07:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:58.734 21:07:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:58.734 21:07:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:58.734 00:10:58.734 real 0m3.330s 00:10:58.734 user 0m9.859s 00:10:58.734 sys 0m0.481s 00:10:58.734 ************************************ 00:10:58.734 END TEST accel_decomp_full_mcore 00:10:58.734 ************************************ 00:10:58.734 21:07:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.734 21:07:21 -- common/autotest_common.sh@10 -- # set +x 00:10:58.734 21:07:21 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:58.734 21:07:21 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:10:58.734 21:07:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:58.734 21:07:21 -- common/autotest_common.sh@10 -- # set +x 00:10:58.734 ************************************ 00:10:58.734 START TEST accel_decomp_mthread 00:10:58.734 ************************************ 00:10:58.734 21:07:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:58.734 21:07:21 -- accel/accel.sh@16 -- # local accel_opc 00:10:58.734 21:07:21 -- accel/accel.sh@17 -- # local accel_module 00:10:58.734 21:07:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:58.734 21:07:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:58.734 21:07:21 -- accel/accel.sh@12 -- # build_accel_config 00:10:58.734 21:07:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:58.734 21:07:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:58.734 21:07:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:58.734 21:07:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:58.734 21:07:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:58.734 21:07:21 -- accel/accel.sh@41 -- # local IFS=, 00:10:58.734 21:07:21 -- accel/accel.sh@42 -- # jq -r . 00:10:58.734 [2024-06-07 21:07:21.276021] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:58.734 [2024-06-07 21:07:21.276285] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121258 ] 00:10:58.992 [2024-06-07 21:07:21.451341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.992 [2024-06-07 21:07:21.562445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.401 21:07:22 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:00.401 00:11:00.401 SPDK Configuration: 00:11:00.401 Core mask: 0x1 00:11:00.401 00:11:00.401 Accel Perf Configuration: 00:11:00.401 Workload Type: decompress 00:11:00.401 Transfer size: 4096 bytes 00:11:00.401 Vector count 1 00:11:00.401 Module: software 00:11:00.401 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:00.401 Queue depth: 32 00:11:00.401 Allocate depth: 32 00:11:00.401 # threads/core: 2 00:11:00.401 Run time: 1 seconds 00:11:00.401 Verify: Yes 00:11:00.401 00:11:00.401 Running for 1 seconds... 00:11:00.401 00:11:00.401 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:00.402 ------------------------------------------------------------------------------------ 00:11:00.402 0,1 27840/s 51 MiB/s 0 0 00:11:00.402 0,0 27680/s 51 MiB/s 0 0 00:11:00.402 ==================================================================================== 00:11:00.402 Total 55520/s 216 MiB/s 0 0' 00:11:00.402 21:07:22 -- accel/accel.sh@20 -- # IFS=: 00:11:00.402 21:07:22 -- accel/accel.sh@20 -- # read -r var val 00:11:00.402 21:07:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:00.402 21:07:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:00.402 21:07:22 -- accel/accel.sh@12 -- # build_accel_config 00:11:00.402 21:07:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:00.402 21:07:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:00.402 21:07:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:00.402 21:07:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:00.402 21:07:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:00.402 21:07:22 -- accel/accel.sh@41 -- # local IFS=, 00:11:00.402 21:07:22 -- accel/accel.sh@42 -- # jq -r . 00:11:00.402 [2024-06-07 21:07:22.892730] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
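The -T 2 run keeps the default 0x1 mask but services core 0 with two worker threads — hence the "0,1" and "0,0" rows in the table above and the "# threads/core: 2" line in the configuration block — and the Total row checks out once more: 55520/s * 4096 bytes ≈ 227 MB/s ≈ 216 MiB/s. Sketch, same assumptions as the earlier ones:

# Two threads on core 0 (-T 2), default 4096-byte transfers.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
  -t 1 -w decompress \
  -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
  -y -T 2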
00:11:00.402 [2024-06-07 21:07:22.892996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121286 ] 00:11:00.662 [2024-06-07 21:07:23.062873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.662 [2024-06-07 21:07:23.169213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.662 21:07:23 -- accel/accel.sh@21 -- # val= 00:11:00.662 21:07:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # IFS=: 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # read -r var val 00:11:00.662 21:07:23 -- accel/accel.sh@21 -- # val= 00:11:00.662 21:07:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # IFS=: 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # read -r var val 00:11:00.662 21:07:23 -- accel/accel.sh@21 -- # val= 00:11:00.662 21:07:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # IFS=: 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # read -r var val 00:11:00.662 21:07:23 -- accel/accel.sh@21 -- # val=0x1 00:11:00.662 21:07:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # IFS=: 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # read -r var val 00:11:00.662 21:07:23 -- accel/accel.sh@21 -- # val= 00:11:00.662 21:07:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # IFS=: 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # read -r var val 00:11:00.662 21:07:23 -- accel/accel.sh@21 -- # val= 00:11:00.662 21:07:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # IFS=: 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # read -r var val 00:11:00.662 21:07:23 -- accel/accel.sh@21 -- # val=decompress 00:11:00.662 21:07:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.662 21:07:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # IFS=: 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # read -r var val 00:11:00.662 21:07:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:00.662 21:07:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # IFS=: 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # read -r var val 00:11:00.662 21:07:23 -- accel/accel.sh@21 -- # val= 00:11:00.662 21:07:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # IFS=: 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # read -r var val 00:11:00.662 21:07:23 -- accel/accel.sh@21 -- # val=software 00:11:00.662 21:07:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.662 21:07:23 -- accel/accel.sh@23 -- # accel_module=software 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # IFS=: 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # read -r var val 00:11:00.662 21:07:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:00.662 21:07:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # IFS=: 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # read -r var val 00:11:00.662 21:07:23 -- accel/accel.sh@21 -- # val=32 00:11:00.662 21:07:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # IFS=: 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # read -r var val 00:11:00.662 21:07:23 -- 
accel/accel.sh@21 -- # val=32 00:11:00.662 21:07:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # IFS=: 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # read -r var val 00:11:00.662 21:07:23 -- accel/accel.sh@21 -- # val=2 00:11:00.662 21:07:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # IFS=: 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # read -r var val 00:11:00.662 21:07:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:00.662 21:07:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # IFS=: 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # read -r var val 00:11:00.662 21:07:23 -- accel/accel.sh@21 -- # val=Yes 00:11:00.662 21:07:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # IFS=: 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # read -r var val 00:11:00.662 21:07:23 -- accel/accel.sh@21 -- # val= 00:11:00.662 21:07:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # IFS=: 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # read -r var val 00:11:00.662 21:07:23 -- accel/accel.sh@21 -- # val= 00:11:00.662 21:07:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # IFS=: 00:11:00.662 21:07:23 -- accel/accel.sh@20 -- # read -r var val 00:11:02.035 21:07:24 -- accel/accel.sh@21 -- # val= 00:11:02.035 21:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.035 21:07:24 -- accel/accel.sh@20 -- # IFS=: 00:11:02.035 21:07:24 -- accel/accel.sh@20 -- # read -r var val 00:11:02.035 21:07:24 -- accel/accel.sh@21 -- # val= 00:11:02.035 21:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.035 21:07:24 -- accel/accel.sh@20 -- # IFS=: 00:11:02.035 21:07:24 -- accel/accel.sh@20 -- # read -r var val 00:11:02.035 21:07:24 -- accel/accel.sh@21 -- # val= 00:11:02.035 21:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.035 21:07:24 -- accel/accel.sh@20 -- # IFS=: 00:11:02.035 21:07:24 -- accel/accel.sh@20 -- # read -r var val 00:11:02.035 21:07:24 -- accel/accel.sh@21 -- # val= 00:11:02.035 21:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.035 21:07:24 -- accel/accel.sh@20 -- # IFS=: 00:11:02.035 21:07:24 -- accel/accel.sh@20 -- # read -r var val 00:11:02.035 21:07:24 -- accel/accel.sh@21 -- # val= 00:11:02.035 21:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.035 21:07:24 -- accel/accel.sh@20 -- # IFS=: 00:11:02.035 21:07:24 -- accel/accel.sh@20 -- # read -r var val 00:11:02.035 21:07:24 -- accel/accel.sh@21 -- # val= 00:11:02.035 21:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.035 21:07:24 -- accel/accel.sh@20 -- # IFS=: 00:11:02.035 21:07:24 -- accel/accel.sh@20 -- # read -r var val 00:11:02.035 21:07:24 -- accel/accel.sh@21 -- # val= 00:11:02.035 21:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.035 21:07:24 -- accel/accel.sh@20 -- # IFS=: 00:11:02.035 21:07:24 -- accel/accel.sh@20 -- # read -r var val 00:11:02.035 21:07:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:02.035 21:07:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:02.035 21:07:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:02.035 ************************************ 00:11:02.035 00:11:02.035 real 0m3.215s 00:11:02.035 user 0m2.705s 00:11:02.035 sys 0m0.351s 00:11:02.035 21:07:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.035 21:07:24 -- common/autotest_common.sh@10 -- # set +x 00:11:02.035 END 
TEST accel_decomp_mthread 00:11:02.035 ************************************ 00:11:02.035 21:07:24 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:02.035 21:07:24 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:02.035 21:07:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:02.035 21:07:24 -- common/autotest_common.sh@10 -- # set +x 00:11:02.035 ************************************ 00:11:02.035 START TEST accel_deomp_full_mthread 00:11:02.035 ************************************ 00:11:02.035 21:07:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:02.035 21:07:24 -- accel/accel.sh@16 -- # local accel_opc 00:11:02.035 21:07:24 -- accel/accel.sh@17 -- # local accel_module 00:11:02.035 21:07:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:02.035 21:07:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:02.035 21:07:24 -- accel/accel.sh@12 -- # build_accel_config 00:11:02.035 21:07:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:02.035 21:07:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:02.035 21:07:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:02.035 21:07:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:02.035 21:07:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:02.035 21:07:24 -- accel/accel.sh@41 -- # local IFS=, 00:11:02.035 21:07:24 -- accel/accel.sh@42 -- # jq -r . 00:11:02.035 [2024-06-07 21:07:24.537064] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:02.035 [2024-06-07 21:07:24.537838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121345 ] 00:11:02.035 [2024-06-07 21:07:24.706243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.294 [2024-06-07 21:07:24.816146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.670 21:07:26 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:03.670 00:11:03.670 SPDK Configuration: 00:11:03.670 Core mask: 0x1 00:11:03.670 00:11:03.670 Accel Perf Configuration: 00:11:03.670 Workload Type: decompress 00:11:03.670 Transfer size: 111250 bytes 00:11:03.670 Vector count 1 00:11:03.670 Module: software 00:11:03.670 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:03.670 Queue depth: 32 00:11:03.670 Allocate depth: 32 00:11:03.670 # threads/core: 2 00:11:03.670 Run time: 1 seconds 00:11:03.670 Verify: Yes 00:11:03.670 00:11:03.670 Running for 1 seconds... 
00:11:03.670 00:11:03.670 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:03.670 ------------------------------------------------------------------------------------ 00:11:03.670 0,1 2112/s 87 MiB/s 0 0 00:11:03.670 0,0 2080/s 85 MiB/s 0 0 00:11:03.670 ==================================================================================== 00:11:03.670 Total 4192/s 444 MiB/s 0 0' 00:11:03.670 21:07:26 -- accel/accel.sh@20 -- # IFS=: 00:11:03.670 21:07:26 -- accel/accel.sh@20 -- # read -r var val 00:11:03.670 21:07:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:03.670 21:07:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:03.670 21:07:26 -- accel/accel.sh@12 -- # build_accel_config 00:11:03.670 21:07:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:03.670 21:07:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:03.670 21:07:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:03.670 21:07:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:03.670 21:07:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:03.670 21:07:26 -- accel/accel.sh@41 -- # local IFS=, 00:11:03.670 21:07:26 -- accel/accel.sh@42 -- # jq -r . 00:11:03.670 [2024-06-07 21:07:26.151587] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:03.670 [2024-06-07 21:07:26.152376] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121373 ] 00:11:03.670 [2024-06-07 21:07:26.321934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.929 [2024-06-07 21:07:26.422731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.929 21:07:26 -- accel/accel.sh@21 -- # val= 00:11:03.929 21:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.929 21:07:26 -- accel/accel.sh@20 -- # IFS=: 00:11:03.929 21:07:26 -- accel/accel.sh@20 -- # read -r var val 00:11:03.929 21:07:26 -- accel/accel.sh@21 -- # val= 00:11:03.929 21:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.929 21:07:26 -- accel/accel.sh@20 -- # IFS=: 00:11:03.929 21:07:26 -- accel/accel.sh@20 -- # read -r var val 00:11:03.929 21:07:26 -- accel/accel.sh@21 -- # val= 00:11:03.929 21:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.929 21:07:26 -- accel/accel.sh@20 -- # IFS=: 00:11:03.929 21:07:26 -- accel/accel.sh@20 -- # read -r var val 00:11:03.929 21:07:26 -- accel/accel.sh@21 -- # val=0x1 00:11:03.929 21:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.929 21:07:26 -- accel/accel.sh@20 -- # IFS=: 00:11:03.929 21:07:26 -- accel/accel.sh@20 -- # read -r var val 00:11:03.929 21:07:26 -- accel/accel.sh@21 -- # val= 00:11:03.929 21:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.929 21:07:26 -- accel/accel.sh@20 -- # IFS=: 00:11:03.929 21:07:26 -- accel/accel.sh@20 -- # read -r var val 00:11:03.929 21:07:26 -- accel/accel.sh@21 -- # val= 00:11:03.929 21:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.929 21:07:26 -- accel/accel.sh@20 -- # IFS=: 00:11:03.929 21:07:26 -- accel/accel.sh@20 -- # read -r var val 00:11:03.929 21:07:26 -- accel/accel.sh@21 -- # val=decompress 00:11:03.929 21:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.929 21:07:26 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:11:03.929 21:07:26 -- accel/accel.sh@20 -- # IFS=: 00:11:03.929 21:07:26 -- accel/accel.sh@20 -- # read -r var val 00:11:03.929 21:07:26 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:03.929 21:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # IFS=: 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # read -r var val 00:11:03.930 21:07:26 -- accel/accel.sh@21 -- # val= 00:11:03.930 21:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # IFS=: 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # read -r var val 00:11:03.930 21:07:26 -- accel/accel.sh@21 -- # val=software 00:11:03.930 21:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.930 21:07:26 -- accel/accel.sh@23 -- # accel_module=software 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # IFS=: 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # read -r var val 00:11:03.930 21:07:26 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:03.930 21:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # IFS=: 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # read -r var val 00:11:03.930 21:07:26 -- accel/accel.sh@21 -- # val=32 00:11:03.930 21:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # IFS=: 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # read -r var val 00:11:03.930 21:07:26 -- accel/accel.sh@21 -- # val=32 00:11:03.930 21:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # IFS=: 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # read -r var val 00:11:03.930 21:07:26 -- accel/accel.sh@21 -- # val=2 00:11:03.930 21:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # IFS=: 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # read -r var val 00:11:03.930 21:07:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:03.930 21:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # IFS=: 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # read -r var val 00:11:03.930 21:07:26 -- accel/accel.sh@21 -- # val=Yes 00:11:03.930 21:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # IFS=: 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # read -r var val 00:11:03.930 21:07:26 -- accel/accel.sh@21 -- # val= 00:11:03.930 21:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # IFS=: 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # read -r var val 00:11:03.930 21:07:26 -- accel/accel.sh@21 -- # val= 00:11:03.930 21:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # IFS=: 00:11:03.930 21:07:26 -- accel/accel.sh@20 -- # read -r var val 00:11:05.307 21:07:27 -- accel/accel.sh@21 -- # val= 00:11:05.307 21:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.307 21:07:27 -- accel/accel.sh@20 -- # IFS=: 00:11:05.307 21:07:27 -- accel/accel.sh@20 -- # read -r var val 00:11:05.307 21:07:27 -- accel/accel.sh@21 -- # val= 00:11:05.307 21:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.307 21:07:27 -- accel/accel.sh@20 -- # IFS=: 00:11:05.307 21:07:27 -- accel/accel.sh@20 -- # read -r var val 00:11:05.307 21:07:27 -- accel/accel.sh@21 -- # val= 00:11:05.307 21:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.307 21:07:27 -- accel/accel.sh@20 -- # IFS=: 00:11:05.307 21:07:27 -- accel/accel.sh@20 -- # 
read -r var val 00:11:05.307 21:07:27 -- accel/accel.sh@21 -- # val= 00:11:05.307 21:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.307 21:07:27 -- accel/accel.sh@20 -- # IFS=: 00:11:05.307 21:07:27 -- accel/accel.sh@20 -- # read -r var val 00:11:05.307 21:07:27 -- accel/accel.sh@21 -- # val= 00:11:05.307 21:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.307 21:07:27 -- accel/accel.sh@20 -- # IFS=: 00:11:05.307 21:07:27 -- accel/accel.sh@20 -- # read -r var val 00:11:05.307 21:07:27 -- accel/accel.sh@21 -- # val= 00:11:05.307 21:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.307 21:07:27 -- accel/accel.sh@20 -- # IFS=: 00:11:05.307 21:07:27 -- accel/accel.sh@20 -- # read -r var val 00:11:05.307 21:07:27 -- accel/accel.sh@21 -- # val= 00:11:05.307 21:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.307 21:07:27 -- accel/accel.sh@20 -- # IFS=: 00:11:05.307 21:07:27 -- accel/accel.sh@20 -- # read -r var val 00:11:05.307 21:07:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:05.307 21:07:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:05.307 21:07:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:05.307 00:11:05.307 real 0m3.247s 00:11:05.307 user 0m2.788s 00:11:05.307 sys 0m0.309s 00:11:05.307 ************************************ 00:11:05.307 END TEST accel_deomp_full_mthread 00:11:05.307 ************************************ 00:11:05.307 21:07:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.307 21:07:27 -- common/autotest_common.sh@10 -- # set +x 00:11:05.307 21:07:27 -- accel/accel.sh@116 -- # [[ n == y ]] 00:11:05.307 21:07:27 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:05.307 21:07:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:05.307 21:07:27 -- accel/accel.sh@129 -- # build_accel_config 00:11:05.307 21:07:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:05.307 21:07:27 -- common/autotest_common.sh@10 -- # set +x 00:11:05.307 21:07:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:05.307 21:07:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:05.307 21:07:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:05.307 21:07:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:05.307 21:07:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:05.307 21:07:27 -- accel/accel.sh@41 -- # local IFS=, 00:11:05.307 21:07:27 -- accel/accel.sh@42 -- # jq -r . 00:11:05.307 ************************************ 00:11:05.307 START TEST accel_dif_functional_tests 00:11:05.307 ************************************ 00:11:05.307 21:07:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:05.307 [2024-06-07 21:07:27.870801] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
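For the full-size multithreaded run that just finished, the Total row is again consistent: 4192 transfers/s * 111250 bytes ≈ 466 MB/s ≈ 444 MiB/s, split almost evenly across the two core-0 threads (2112/s and 2080/s). Sketch combining both options, same assumptions as above:

# Full-size transfers (-o 0) with two threads on core 0 (-T 2).
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
  -t 1 -w decompress \
  -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
  -y -o 0 -T 2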
00:11:05.307 [2024-06-07 21:07:27.871012] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121421 ] 00:11:05.566 [2024-06-07 21:07:28.055144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:05.566 [2024-06-07 21:07:28.133196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.566 [2024-06-07 21:07:28.133313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.566 [2024-06-07 21:07:28.133555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.566 00:11:05.566 00:11:05.566 CUnit - A unit testing framework for C - Version 2.1-3 00:11:05.566 http://cunit.sourceforge.net/ 00:11:05.566 00:11:05.566 00:11:05.566 Suite: accel_dif 00:11:05.566 Test: verify: DIF generated, GUARD check ...passed 00:11:05.566 Test: verify: DIF generated, APPTAG check ...passed 00:11:05.566 Test: verify: DIF generated, REFTAG check ...passed 00:11:05.566 Test: verify: DIF not generated, GUARD check ...[2024-06-07 21:07:28.230598] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:05.566 [2024-06-07 21:07:28.230750] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:05.566 passed 00:11:05.566 Test: verify: DIF not generated, APPTAG check ...[2024-06-07 21:07:28.231035] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:05.566 [2024-06-07 21:07:28.231140] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:05.566 passed 00:11:05.566 Test: verify: DIF not generated, REFTAG check ...[2024-06-07 21:07:28.231377] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:05.566 [2024-06-07 21:07:28.231468] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:05.566 passed 00:11:05.566 Test: verify: APPTAG correct, APPTAG check ...passed 00:11:05.566 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-07 21:07:28.231952] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:05.566 passed 00:11:05.566 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:11:05.566 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:05.566 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:05.566 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-07 21:07:28.232703] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:05.566 passed 00:11:05.566 Test: generate copy: DIF generated, GUARD check ...passed 00:11:05.566 Test: generate copy: DIF generated, APTTAG check ...passed 00:11:05.566 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:05.566 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:11:05.566 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:11:05.566 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:11:05.566 Test: generate copy: iovecs-len validate ...[2024-06-07 21:07:28.234232] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:11:05.566 passed 00:11:05.566 Test: generate copy: buffer alignment validate ...passed 00:11:05.566 00:11:05.566 Run Summary: Type Total Ran Passed Failed Inactive 00:11:05.566 suites 1 1 n/a 0 0 00:11:05.566 tests 20 20 20 0 0 00:11:05.566 asserts 204 204 204 0 n/a 00:11:05.566 00:11:05.566 Elapsed time = 0.010 seconds 00:11:05.825 00:11:05.825 real 0m0.686s 00:11:05.825 user 0m0.835s 00:11:05.825 sys 0m0.251s 00:11:05.825 21:07:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.825 21:07:28 -- common/autotest_common.sh@10 -- # set +x 00:11:05.825 ************************************ 00:11:05.825 END TEST accel_dif_functional_tests 00:11:05.825 ************************************ 00:11:06.084 ************************************ 00:11:06.084 END TEST accel 00:11:06.084 ************************************ 00:11:06.084 00:11:06.084 real 1m8.851s 00:11:06.084 user 1m12.748s 00:11:06.084 sys 0m8.390s 00:11:06.084 21:07:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.084 21:07:28 -- common/autotest_common.sh@10 -- # set +x 00:11:06.084 21:07:28 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:06.084 21:07:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:06.084 21:07:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:06.084 21:07:28 -- common/autotest_common.sh@10 -- # set +x 00:11:06.084 ************************************ 00:11:06.084 START TEST accel_rpc 00:11:06.084 ************************************ 00:11:06.084 21:07:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:06.084 * Looking for test storage... 00:11:06.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:06.084 21:07:28 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:06.084 21:07:28 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=121488 00:11:06.084 21:07:28 -- accel/accel_rpc.sh@15 -- # waitforlisten 121488 00:11:06.084 21:07:28 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:06.084 21:07:28 -- common/autotest_common.sh@819 -- # '[' -z 121488 ']' 00:11:06.084 21:07:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.084 21:07:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:06.084 21:07:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.084 21:07:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:06.084 21:07:28 -- common/autotest_common.sh@10 -- # set +x 00:11:06.084 [2024-06-07 21:07:28.720244] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:11:06.084 [2024-06-07 21:07:28.720477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121488 ] 00:11:06.343 [2024-06-07 21:07:28.880851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.343 [2024-06-07 21:07:28.972092] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:06.343 [2024-06-07 21:07:28.972448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.276 21:07:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:07.276 21:07:29 -- common/autotest_common.sh@852 -- # return 0 00:11:07.276 21:07:29 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:07.276 21:07:29 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:11:07.277 21:07:29 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:07.277 21:07:29 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:11:07.277 21:07:29 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:07.277 21:07:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:07.277 21:07:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:07.277 21:07:29 -- common/autotest_common.sh@10 -- # set +x 00:11:07.277 ************************************ 00:11:07.277 START TEST accel_assign_opcode 00:11:07.277 ************************************ 00:11:07.277 21:07:29 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:11:07.277 21:07:29 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:07.277 21:07:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:07.277 21:07:29 -- common/autotest_common.sh@10 -- # set +x 00:11:07.277 [2024-06-07 21:07:29.729423] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:07.277 21:07:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:07.277 21:07:29 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:07.277 21:07:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:07.277 21:07:29 -- common/autotest_common.sh@10 -- # set +x 00:11:07.277 [2024-06-07 21:07:29.737382] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:07.277 21:07:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:07.277 21:07:29 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:07.277 21:07:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:07.277 21:07:29 -- common/autotest_common.sh@10 -- # set +x 00:11:07.535 21:07:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:07.535 21:07:29 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:07.535 21:07:29 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:07.535 21:07:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:07.535 21:07:29 -- common/autotest_common.sh@10 -- # set +x 00:11:07.535 21:07:29 -- accel/accel_rpc.sh@42 -- # grep software 00:11:07.535 21:07:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:07.535 software 00:11:07.535 00:11:07.535 real 0m0.308s 00:11:07.535 user 0m0.056s 00:11:07.535 sys 0m0.004s 00:11:07.535 21:07:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:07.535 ************************************ 00:11:07.535 END TEST accel_assign_opcode 00:11:07.535 
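The accel_assign_opcode suite drives the opcode-assignment RPCs visible in the trace: while spdk_tgt is still in --wait-for-rpc mode, accel_assign_opc maps an operation (here copy) to a module; a later assignment overrides an earlier one (the bogus module "incorrect" is accepted at RPC time but replaced by "software"); and after framework_start_init, accel_get_opc_assignments reports which module finally owns each opcode. A sketch of the same flow driven from Python, using the rpc.py path from this run and assuming a live spdk_tgt listening on the default socket:

import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"  # path as used in this run

def rpc(*args: str) -> str:
    # Invoke SPDK's JSON-RPC CLI against the default /var/tmp/spdk.sock socket.
    return subprocess.run([RPC, *args], check=True, capture_output=True, text=True).stdout

rpc("accel_assign_opc", "-o", "copy", "-m", "incorrect")  # accepted, but overridden below
rpc("accel_assign_opc", "-o", "copy", "-m", "software")   # last assignment wins
rpc("framework_start_init")                               # accel framework now resolves modules
print(rpc("accel_get_opc_assignments"))                   # JSON; .copy should be "software"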
************************************ 00:11:07.535 21:07:30 -- common/autotest_common.sh@10 -- # set +x 00:11:07.535 21:07:30 -- accel/accel_rpc.sh@55 -- # killprocess 121488 00:11:07.535 21:07:30 -- common/autotest_common.sh@926 -- # '[' -z 121488 ']' 00:11:07.535 21:07:30 -- common/autotest_common.sh@930 -- # kill -0 121488 00:11:07.535 21:07:30 -- common/autotest_common.sh@931 -- # uname 00:11:07.535 21:07:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:07.535 21:07:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121488 00:11:07.535 21:07:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:07.535 killing process with pid 121488 00:11:07.535 21:07:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:07.535 21:07:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121488' 00:11:07.535 21:07:30 -- common/autotest_common.sh@945 -- # kill 121488 00:11:07.535 21:07:30 -- common/autotest_common.sh@950 -- # wait 121488 00:11:08.102 ************************************ 00:11:08.102 END TEST accel_rpc 00:11:08.102 ************************************ 00:11:08.102 00:11:08.102 real 0m1.962s 00:11:08.102 user 0m2.073s 00:11:08.102 sys 0m0.449s 00:11:08.102 21:07:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:08.102 21:07:30 -- common/autotest_common.sh@10 -- # set +x 00:11:08.102 21:07:30 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:08.102 21:07:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:08.102 21:07:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:08.102 21:07:30 -- common/autotest_common.sh@10 -- # set +x 00:11:08.102 ************************************ 00:11:08.102 START TEST app_cmdline 00:11:08.102 ************************************ 00:11:08.102 21:07:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:08.102 * Looking for test storage... 00:11:08.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:08.102 21:07:30 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:08.102 21:07:30 -- app/cmdline.sh@17 -- # spdk_tgt_pid=121591 00:11:08.102 21:07:30 -- app/cmdline.sh@18 -- # waitforlisten 121591 00:11:08.102 21:07:30 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:08.102 21:07:30 -- common/autotest_common.sh@819 -- # '[' -z 121591 ']' 00:11:08.102 21:07:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.102 21:07:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:08.102 21:07:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.102 21:07:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:08.102 21:07:30 -- common/autotest_common.sh@10 -- # set +x 00:11:08.102 [2024-06-07 21:07:30.728643] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:11:08.102 [2024-06-07 21:07:30.728864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121591 ] 00:11:08.360 [2024-06-07 21:07:30.885539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.360 [2024-06-07 21:07:30.969693] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:08.360 [2024-06-07 21:07:30.970030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.295 21:07:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:09.295 21:07:31 -- common/autotest_common.sh@852 -- # return 0 00:11:09.295 21:07:31 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:09.295 { 00:11:09.295 "version": "SPDK v24.01.1-pre git sha1 130b9406a", 00:11:09.295 "fields": { 00:11:09.295 "major": 24, 00:11:09.295 "minor": 1, 00:11:09.295 "patch": 1, 00:11:09.295 "suffix": "-pre", 00:11:09.295 "commit": "130b9406a" 00:11:09.295 } 00:11:09.295 } 00:11:09.295 21:07:31 -- app/cmdline.sh@22 -- # expected_methods=() 00:11:09.295 21:07:31 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:09.295 21:07:31 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:09.295 21:07:31 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:09.295 21:07:31 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:09.295 21:07:31 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:09.295 21:07:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:09.295 21:07:31 -- app/cmdline.sh@26 -- # sort 00:11:09.295 21:07:31 -- common/autotest_common.sh@10 -- # set +x 00:11:09.295 21:07:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:09.554 21:07:31 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:09.554 21:07:31 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:09.554 21:07:31 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:09.554 21:07:31 -- common/autotest_common.sh@640 -- # local es=0 00:11:09.554 21:07:31 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:09.554 21:07:31 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.554 21:07:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:09.554 21:07:31 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.554 21:07:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:09.554 21:07:31 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.554 21:07:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:09.554 21:07:31 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.554 21:07:31 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:09.554 21:07:31 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:09.813 request: 00:11:09.813 { 00:11:09.813 "method": "env_dpdk_get_mem_stats", 00:11:09.813 "req_id": 1 00:11:09.813 } 00:11:09.813 Got 
JSON-RPC error response 00:11:09.813 response: 00:11:09.813 { 00:11:09.813 "code": -32601, 00:11:09.813 "message": "Method not found" 00:11:09.813 } 00:11:09.813 21:07:32 -- common/autotest_common.sh@643 -- # es=1 00:11:09.813 21:07:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:09.813 21:07:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:09.813 21:07:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:09.813 21:07:32 -- app/cmdline.sh@1 -- # killprocess 121591 00:11:09.813 21:07:32 -- common/autotest_common.sh@926 -- # '[' -z 121591 ']' 00:11:09.813 21:07:32 -- common/autotest_common.sh@930 -- # kill -0 121591 00:11:09.813 21:07:32 -- common/autotest_common.sh@931 -- # uname 00:11:09.813 21:07:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:09.813 21:07:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121591 00:11:09.813 21:07:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:09.813 killing process with pid 121591 00:11:09.813 21:07:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:09.813 21:07:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121591' 00:11:09.813 21:07:32 -- common/autotest_common.sh@945 -- # kill 121591 00:11:09.813 21:07:32 -- common/autotest_common.sh@950 -- # wait 121591 00:11:10.381 00:11:10.381 real 0m2.186s 00:11:10.381 user 0m2.702s 00:11:10.381 sys 0m0.491s 00:11:10.381 ************************************ 00:11:10.381 END TEST app_cmdline 00:11:10.381 ************************************ 00:11:10.381 21:07:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.381 21:07:32 -- common/autotest_common.sh@10 -- # set +x 00:11:10.381 21:07:32 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:10.381 21:07:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:10.381 21:07:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:10.381 21:07:32 -- common/autotest_common.sh@10 -- # set +x 00:11:10.381 ************************************ 00:11:10.381 START TEST version 00:11:10.381 ************************************ 00:11:10.381 21:07:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:10.381 * Looking for test storage... 
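The env_dpdk_get_mem_stats probe in the cmdline suite just above confirms how spdk_tgt --rpcs-allowed behaves on the wire: a method outside the allow-list is rejected with the standard JSON-RPC 2.0 error object, code -32601 ("Method not found"), exactly as the request/response pair shows. A minimal client sketch in Python; /var/tmp/spdk.sock is SPDK's default RPC socket as used in this run, while the framing below (write one JSON object, read until the reply parses) is an assumption made for illustration:

import json
import socket

SOCK = "/var/tmp/spdk.sock"  # default SPDK RPC socket, as in this run

req = {"jsonrpc": "2.0", "method": "env_dpdk_get_mem_stats", "id": 1}
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(SOCK)
    s.sendall(json.dumps(req).encode())
    buf = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:
            raise ConnectionError("socket closed before a full reply arrived")
        buf += chunk
        try:
            resp = json.loads(buf)
            break
        except json.JSONDecodeError:
            continue  # keep reading until one complete JSON object has arrived

# A method outside --rpcs-allowed yields the standard JSON-RPC error:
assert resp["error"]["code"] == -32601  # "Method not found"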
00:11:10.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:10.381 21:07:32 -- app/version.sh@17 -- # get_header_version major 00:11:10.381 21:07:32 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:10.381 21:07:32 -- app/version.sh@14 -- # cut -f2 00:11:10.381 21:07:32 -- app/version.sh@14 -- # tr -d '"' 00:11:10.381 21:07:32 -- app/version.sh@17 -- # major=24 00:11:10.381 21:07:32 -- app/version.sh@18 -- # get_header_version minor 00:11:10.381 21:07:32 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:10.381 21:07:32 -- app/version.sh@14 -- # cut -f2 00:11:10.381 21:07:32 -- app/version.sh@14 -- # tr -d '"' 00:11:10.381 21:07:32 -- app/version.sh@18 -- # minor=1 00:11:10.381 21:07:32 -- app/version.sh@19 -- # get_header_version patch 00:11:10.381 21:07:32 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:10.381 21:07:32 -- app/version.sh@14 -- # cut -f2 00:11:10.381 21:07:32 -- app/version.sh@14 -- # tr -d '"' 00:11:10.381 21:07:32 -- app/version.sh@19 -- # patch=1 00:11:10.382 21:07:32 -- app/version.sh@20 -- # get_header_version suffix 00:11:10.382 21:07:32 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:10.382 21:07:32 -- app/version.sh@14 -- # cut -f2 00:11:10.382 21:07:32 -- app/version.sh@14 -- # tr -d '"' 00:11:10.382 21:07:32 -- app/version.sh@20 -- # suffix=-pre 00:11:10.382 21:07:32 -- app/version.sh@22 -- # version=24.1 00:11:10.382 21:07:32 -- app/version.sh@25 -- # (( patch != 0 )) 00:11:10.382 21:07:32 -- app/version.sh@25 -- # version=24.1.1 00:11:10.382 21:07:32 -- app/version.sh@28 -- # version=24.1.1rc0 00:11:10.382 21:07:32 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:10.382 21:07:32 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:10.382 21:07:32 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:11:10.382 21:07:32 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:11:10.382 00:11:10.382 real 0m0.147s 00:11:10.382 user 0m0.122s 00:11:10.382 sys 0m0.059s 00:11:10.382 21:07:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.382 ************************************ 00:11:10.382 END TEST version 00:11:10.382 ************************************ 00:11:10.382 21:07:32 -- common/autotest_common.sh@10 -- # set +x 00:11:10.382 21:07:33 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:11:10.382 21:07:33 -- spdk/autotest.sh@195 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:10.382 21:07:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:10.382 21:07:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:10.382 21:07:33 -- common/autotest_common.sh@10 -- # set +x 00:11:10.382 ************************************ 00:11:10.382 START TEST blockdev_general 00:11:10.382 ************************************ 00:11:10.382 21:07:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:10.639 * Looking for test storage... 
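The version suite that just passed rebuilds the version string from include/spdk/version.h exactly as traced: grep the #define for each of MAJOR, MINOR, PATCH and SUFFIX, join major.minor, append .patch only when patch is non-zero, turn the -pre suffix into an rc0 tail, and require the result (24.1.1rc0 here) to equal what python3 -c 'import spdk; print(spdk.__version__)' reports. The same logic as a Python sketch; the header path and #define format are assumed to match the header the script greps:

import re

def get_header_version(field: str, header: str = "include/spdk/version.h") -> str:
    # Mirrors: grep -E '^#define SPDK_VERSION_<FIELD>[[:space:]]+' | cut -f2 | tr -d '"'
    with open(header) as f:
        text = f.read()
    match = re.search(rf'^#define SPDK_VERSION_{field}\s+(\S+)', text, re.M)
    return match.group(1).strip('"')

major = get_header_version("MAJOR")    # 24 in this run
minor = get_header_version("MINOR")    # 1
patch = get_header_version("PATCH")    # 1
suffix = get_header_version("SUFFIX")  # -pre

version = f"{major}.{minor}"
if patch != "0":
    version += f".{patch}"             # 24.1 -> 24.1.1, as in the trace
if suffix == "-pre":
    version += "rc0"                   # 24.1.1rc0, matching spdk.__version__

assert version == "24.1.1rc0"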
00:11:10.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:10.639 21:07:33 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:10.639 21:07:33 -- bdev/nbd_common.sh@6 -- # set -e 00:11:10.639 21:07:33 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:10.639 21:07:33 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:10.639 21:07:33 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:10.639 21:07:33 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:10.639 21:07:33 -- bdev/blockdev.sh@18 -- # : 00:11:10.639 21:07:33 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:11:10.639 21:07:33 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:11:10.639 21:07:33 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:11:10.639 21:07:33 -- bdev/blockdev.sh@672 -- # uname -s 00:11:10.639 21:07:33 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:11:10.639 21:07:33 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:11:10.639 21:07:33 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:11:10.639 21:07:33 -- bdev/blockdev.sh@681 -- # crypto_device= 00:11:10.639 21:07:33 -- bdev/blockdev.sh@682 -- # dek= 00:11:10.639 21:07:33 -- bdev/blockdev.sh@683 -- # env_ctx= 00:11:10.639 21:07:33 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:11:10.639 21:07:33 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:11:10.639 21:07:33 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:11:10.639 21:07:33 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:11:10.639 21:07:33 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:11:10.639 21:07:33 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=121751 00:11:10.639 21:07:33 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:10.639 21:07:33 -- bdev/blockdev.sh@47 -- # waitforlisten 121751 00:11:10.639 21:07:33 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:11:10.639 21:07:33 -- common/autotest_common.sh@819 -- # '[' -z 121751 ']' 00:11:10.639 21:07:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.639 21:07:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:10.639 21:07:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.639 21:07:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:10.639 21:07:33 -- common/autotest_common.sh@10 -- # set +x 00:11:10.639 [2024-06-07 21:07:33.189894] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:11:10.639 [2024-06-07 21:07:33.190162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121751 ] 00:11:10.897 [2024-06-07 21:07:33.359186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.897 [2024-06-07 21:07:33.473745] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:10.897 [2024-06-07 21:07:33.474026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.831 21:07:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:11.831 21:07:34 -- common/autotest_common.sh@852 -- # return 0 00:11:11.831 21:07:34 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:11:11.831 21:07:34 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:11:11.831 21:07:34 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:11:11.831 21:07:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:11.831 21:07:34 -- common/autotest_common.sh@10 -- # set +x 00:11:11.831 [2024-06-07 21:07:34.455199] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:11.831 [2024-06-07 21:07:34.455279] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:11.831 00:11:11.831 [2024-06-07 21:07:34.463165] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:11.831 [2024-06-07 21:07:34.463245] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:11.831 00:11:11.831 Malloc0 00:11:11.831 Malloc1 00:11:12.089 Malloc2 00:11:12.089 Malloc3 00:11:12.089 Malloc4 00:11:12.089 Malloc5 00:11:12.089 Malloc6 00:11:12.089 Malloc7 00:11:12.089 Malloc8 00:11:12.089 Malloc9 00:11:12.089 [2024-06-07 21:07:34.655512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:12.089 [2024-06-07 21:07:34.655623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.090 [2024-06-07 21:07:34.655688] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:12.090 [2024-06-07 21:07:34.655719] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.090 [2024-06-07 21:07:34.658696] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.090 [2024-06-07 21:07:34.658760] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:12.090 TestPT 00:11:12.090 21:07:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.090 21:07:34 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:11:12.090 5000+0 records in 00:11:12.090 5000+0 records out 00:11:12.090 10240000 bytes (10 MB, 9.8 MiB) copied, 0.025532 s, 401 MB/s 00:11:12.090 21:07:34 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:11:12.090 21:07:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.090 21:07:34 -- common/autotest_common.sh@10 -- # set +x 00:11:12.349 AIO0 00:11:12.349 21:07:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.349 21:07:34 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:11:12.349 21:07:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.349 21:07:34 -- common/autotest_common.sh@10 -- # set +x 
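One detail worth pinning down in the dd step that creates the AIO backing file: the throughput line is internally consistent. 5000 records of 2048 bytes are 10,240,000 bytes, which is 10 MB in decimal units and 10,240,000 / 2^20 ≈ 9.8 MiB in binary units, and 10,240,000 B / 0.025532 s ≈ 4.01 × 10^8 B/s, the 401 MB/s reported. Checked in Python:

records, block_size = 5000, 2048     # dd count= and bs= from the trace
nbytes = records * block_size        # 10_240_000 bytes
elapsed = 0.025532                   # seconds, as reported by dd
print(nbytes / 1e6, nbytes / 2**20)  # 10.24 MB, ~9.77 MiB
print(round(nbytes / elapsed / 1e6)) # 401 MB/s, matching the log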
00:11:12.349 21:07:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.349 21:07:34 -- bdev/blockdev.sh@738 -- # cat 00:11:12.349 21:07:34 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:11:12.349 21:07:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.349 21:07:34 -- common/autotest_common.sh@10 -- # set +x 00:11:12.349 21:07:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.349 21:07:34 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:11:12.349 21:07:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.349 21:07:34 -- common/autotest_common.sh@10 -- # set +x 00:11:12.349 21:07:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.349 21:07:34 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:12.349 21:07:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.349 21:07:34 -- common/autotest_common.sh@10 -- # set +x 00:11:12.349 21:07:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.349 21:07:34 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:11:12.349 21:07:34 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:11:12.349 21:07:34 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:11:12.349 21:07:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.349 21:07:34 -- common/autotest_common.sh@10 -- # set +x 00:11:12.349 21:07:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.349 21:07:34 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:11:12.349 21:07:34 -- bdev/blockdev.sh@747 -- # jq -r .name 00:11:12.350 21:07:34 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "6f5f75d9-2055-4154-b9b5-b34a803a6b5a"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "6f5f75d9-2055-4154-b9b5-b34a803a6b5a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "5b783316-67eb-5541-951e-09cdfd0736d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "5b783316-67eb-5541-951e-09cdfd0736d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "536a5e68-b353-5a08-98bc-1b8660082e93"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "536a5e68-b353-5a08-98bc-1b8660082e93",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "9df9b6b6-e0ea-5bd3-84c9-60add694c1e6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9df9b6b6-e0ea-5bd3-84c9-60add694c1e6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "8bca855e-87c3-5f70-9baf-6e58800d7517"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8bca855e-87c3-5f70-9baf-6e58800d7517",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "c5478617-4dbb-569a-b447-b81a3cca53f7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c5478617-4dbb-569a-b447-b81a3cca53f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "5e8150f8-8d06-56c6-8b76-3886d087d31c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5e8150f8-8d06-56c6-8b76-3886d087d31c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "9223a87a-36ec-546e-bc8f-ffe39504c823"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9223a87a-36ec-546e-bc8f-ffe39504c823",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "dfe5af31-dec6-5b70-9e7d-1fe24d12d8c9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dfe5af31-dec6-5b70-9e7d-1fe24d12d8c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "e92f23b8-c09a-5805-88bb-d7eecf1126d7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e92f23b8-c09a-5805-88bb-d7eecf1126d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "fc1d0117-93f7-50a0-a73f-247f21d713cf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fc1d0117-93f7-50a0-a73f-247f21d713cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "fd5616a2-eb8f-5af6-9d0e-030f2a25da4b"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fd5616a2-eb8f-5af6-9d0e-030f2a25da4b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "7d952d80-0b61-464b-9378-59fede82d8ca"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7d952d80-0b61-464b-9378-59fede82d8ca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "7d952d80-0b61-464b-9378-59fede82d8ca",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "a308ada7-81f0-4517-ac19-0be2c35eb139",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "44450791-1194-4e87-8fd0-226c7e312954",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "52005f19-ea6c-4eba-8025-3834e999df3a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "52005f19-ea6c-4eba-8025-3834e999df3a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "52005f19-ea6c-4eba-8025-3834e999df3a",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "31321730-cdbf-4756-9cea-6901f74ad6d8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "fff1b05b-9056-4c41-b695-1e0478979683",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "bdb83198-bb51-42ac-a995-606e21c25f35"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "bdb83198-bb51-42ac-a995-606e21c25f35",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "bdb83198-bb51-42ac-a995-606e21c25f35",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "6b1c1a61-e61b-4191-9386-c88d8ea6e21f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "3e8db668-9c30-4c46-9a65-1549532a05e2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "5806f2a7-94ff-49c5-89f6-9db788759411"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "5806f2a7-94ff-49c5-89f6-9db788759411",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:11:12.350 21:07:34 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:11:12.350 21:07:34 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:11:12.350 21:07:34 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:11:12.350 21:07:34 -- bdev/blockdev.sh@752 -- # killprocess 121751 00:11:12.350 21:07:34 -- common/autotest_common.sh@926 -- # '[' -z 121751 ']' 00:11:12.350 21:07:34 -- common/autotest_common.sh@930 -- # kill -0 121751 00:11:12.350 21:07:34 -- common/autotest_common.sh@931 -- # uname 00:11:12.350 21:07:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:12.350 21:07:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121751 00:11:12.350 21:07:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:12.350 21:07:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:12.350 21:07:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121751' 00:11:12.350 killing process with pid 121751 00:11:12.350 21:07:35 -- common/autotest_common.sh@945 -- # kill 121751 00:11:12.350 21:07:35 -- common/autotest_common.sh@950 -- # wait 121751 00:11:13.286 21:07:35 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:13.286 21:07:35 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:13.286 21:07:35 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:11:13.286 21:07:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:13.286 21:07:35 -- common/autotest_common.sh@10 -- # set +x 00:11:13.286 ************************************ 00:11:13.286 START TEST bdev_hello_world 00:11:13.286 ************************************ 00:11:13.286 21:07:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:13.286 [2024-06-07 21:07:35.713296] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:13.286 [2024-06-07 21:07:35.713576] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121829 ] 00:11:13.286 [2024-06-07 21:07:35.878875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.545 [2024-06-07 21:07:35.964615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.545 [2024-06-07 21:07:36.109198] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:13.545 [2024-06-07 21:07:36.109339] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:13.545 [2024-06-07 21:07:36.117093] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:13.545 [2024-06-07 21:07:36.117188] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:13.545 [2024-06-07 21:07:36.125132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:13.545 [2024-06-07 21:07:36.125205] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:13.545 [2024-06-07 21:07:36.125248] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:13.804 [2024-06-07 21:07:36.233349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:13.804 [2024-06-07 21:07:36.233541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.804 [2024-06-07 21:07:36.233614] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:13.804 [2024-06-07 21:07:36.233643] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.804 [2024-06-07 21:07:36.236807] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.804 [2024-06-07 21:07:36.236932] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:13.804 [2024-06-07 21:07:36.418693] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:13.804 [2024-06-07 21:07:36.418826] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:11:13.804 [2024-06-07 21:07:36.418989] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:13.804 [2024-06-07 21:07:36.419139] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:13.804 [2024-06-07 21:07:36.419304] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:13.805 [2024-06-07 21:07:36.419374] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:13.805 [2024-06-07 21:07:36.419474] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
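The hello_bdev NOTICE lines trace the example's whole life cycle: start the app, open the bdev named by -b, open an I/O channel, write the string, and read it back on write completion, with success signalled by the final "Hello World!" read before the app stops. The suite launches the binary exactly as shown; a small Python wrapper for the same invocation (paths as in this run; the output assertion is an extra check, since the autotest wrapper itself only inspects the exit status) might look like:

import subprocess

HELLO = "/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev"
CONF = "/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json"

# Run the example against Malloc0 from the JSON config and require the
# round-trip marker in its output.
out = subprocess.run([HELLO, "--json", CONF, "-b", "Malloc0"],
                     capture_output=True, text=True, check=True)
assert "Hello World!" in out.stdout + out.stderr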
00:11:13.805 00:11:13.805 [2024-06-07 21:07:36.419551] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:14.386 00:11:14.386 real 0m1.167s 00:11:14.386 user 0m0.685s 00:11:14.386 sys 0m0.324s 00:11:14.386 21:07:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.386 ************************************ 00:11:14.386 END TEST bdev_hello_world 00:11:14.386 ************************************ 00:11:14.386 21:07:36 -- common/autotest_common.sh@10 -- # set +x 00:11:14.386 21:07:36 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:11:14.386 21:07:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:14.386 21:07:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:14.386 21:07:36 -- common/autotest_common.sh@10 -- # set +x 00:11:14.386 ************************************ 00:11:14.386 START TEST bdev_bounds 00:11:14.386 ************************************ 00:11:14.386 21:07:36 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:11:14.386 21:07:36 -- bdev/blockdev.sh@288 -- # bdevio_pid=121874 00:11:14.386 21:07:36 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:14.386 Process bdevio pid: 121874 00:11:14.386 21:07:36 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 121874' 00:11:14.386 21:07:36 -- bdev/blockdev.sh@291 -- # waitforlisten 121874 00:11:14.386 21:07:36 -- common/autotest_common.sh@819 -- # '[' -z 121874 ']' 00:11:14.386 21:07:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.386 21:07:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:14.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.386 21:07:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.386 21:07:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:14.386 21:07:36 -- common/autotest_common.sh@10 -- # set +x 00:11:14.386 21:07:36 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:14.386 [2024-06-07 21:07:36.928608] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:11:14.386 [2024-06-07 21:07:36.929048] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121874 ] 00:11:14.706 [2024-06-07 21:07:37.122033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:14.706 [2024-06-07 21:07:37.203557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.706 [2024-06-07 21:07:37.203693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.706 [2024-06-07 21:07:37.203696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.706 [2024-06-07 21:07:37.351464] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:14.706 [2024-06-07 21:07:37.351623] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:14.706 [2024-06-07 21:07:37.359415] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:14.706 [2024-06-07 21:07:37.359522] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:14.706 [2024-06-07 21:07:37.367491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:14.706 [2024-06-07 21:07:37.367593] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:14.706 [2024-06-07 21:07:37.367628] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:14.964 [2024-06-07 21:07:37.465914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:14.964 [2024-06-07 21:07:37.466106] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.964 [2024-06-07 21:07:37.466175] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:14.964 [2024-06-07 21:07:37.466200] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.964 [2024-06-07 21:07:37.469298] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.964 [2024-06-07 21:07:37.469384] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:15.531 21:07:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:15.531 21:07:37 -- common/autotest_common.sh@852 -- # return 0 00:11:15.531 21:07:37 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:15.531 I/O targets: 00:11:15.531 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:11:15.531 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:11:15.531 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:11:15.531 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:11:15.531 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:11:15.531 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:11:15.531 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:11:15.531 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:11:15.531 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:11:15.531 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:11:15.531 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:11:15.531 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:11:15.531 raid0: 131072 blocks of 512 bytes (64 MiB) 00:11:15.531 concat0: 131072 blocks of 512 bytes (64 MiB) 00:11:15.531 raid1: 65536 blocks of 512 bytes (32 MiB) 00:11:15.531 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
00:11:15.531 00:11:15.531 00:11:15.531 CUnit - A unit testing framework for C - Version 2.1-3 00:11:15.531 http://cunit.sourceforge.net/ 00:11:15.531 00:11:15.531 00:11:15.531 Suite: bdevio tests on: AIO0 00:11:15.531 Test: blockdev write read block ...passed 00:11:15.531 Test: blockdev write zeroes read block ...passed 00:11:15.531 Test: blockdev write zeroes read no split ...passed 00:11:15.531 Test: blockdev write zeroes read split ...passed 00:11:15.531 Test: blockdev write zeroes read split partial ...passed 00:11:15.531 Test: blockdev reset ...passed 00:11:15.531 Test: blockdev write read 8 blocks ...passed 00:11:15.531 Test: blockdev write read size > 128k ...passed 00:11:15.531 Test: blockdev write read invalid size ...passed 00:11:15.531 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.531 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.531 Test: blockdev write read max offset ...passed 00:11:15.531 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.531 Test: blockdev writev readv 8 blocks ...passed 00:11:15.531 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.531 Test: blockdev writev readv block ...passed 00:11:15.531 Test: blockdev writev readv size > 128k ...passed 00:11:15.531 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.531 Test: blockdev comparev and writev ...passed 00:11:15.531 Test: blockdev nvme passthru rw ...passed 00:11:15.531 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.531 Test: blockdev nvme admin passthru ...passed 00:11:15.531 Test: blockdev copy ...passed 00:11:15.531 Suite: bdevio tests on: raid1 00:11:15.531 Test: blockdev write read block ...passed 00:11:15.531 Test: blockdev write zeroes read block ...passed 00:11:15.531 Test: blockdev write zeroes read no split ...passed 00:11:15.531 Test: blockdev write zeroes read split ...passed 00:11:15.531 Test: blockdev write zeroes read split partial ...passed 00:11:15.531 Test: blockdev reset ...passed 00:11:15.531 Test: blockdev write read 8 blocks ...passed 00:11:15.531 Test: blockdev write read size > 128k ...passed 00:11:15.531 Test: blockdev write read invalid size ...passed 00:11:15.531 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.531 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.531 Test: blockdev write read max offset ...passed 00:11:15.531 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.531 Test: blockdev writev readv 8 blocks ...passed 00:11:15.531 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.531 Test: blockdev writev readv block ...passed 00:11:15.531 Test: blockdev writev readv size > 128k ...passed 00:11:15.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.532 Test: blockdev comparev and writev ...passed 00:11:15.532 Test: blockdev nvme passthru rw ...passed 00:11:15.532 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.532 Test: blockdev nvme admin passthru ...passed 00:11:15.532 Test: blockdev copy ...passed 00:11:15.532 Suite: bdevio tests on: concat0 00:11:15.532 Test: blockdev write read block ...passed 00:11:15.532 Test: blockdev write zeroes read block ...passed 00:11:15.532 Test: blockdev write zeroes read no split ...passed 00:11:15.532 Test: blockdev write zeroes read split ...passed 00:11:15.532 Test: blockdev write zeroes read split partial ...passed 00:11:15.532 Test: blockdev reset 
...passed 00:11:15.532 Test: blockdev write read 8 blocks ...passed 00:11:15.532 Test: blockdev write read size > 128k ...passed 00:11:15.532 Test: blockdev write read invalid size ...passed 00:11:15.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.532 Test: blockdev write read max offset ...passed 00:11:15.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.532 Test: blockdev writev readv 8 blocks ...passed 00:11:15.532 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.532 Test: blockdev writev readv block ...passed 00:11:15.532 Test: blockdev writev readv size > 128k ...passed 00:11:15.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.532 Test: blockdev comparev and writev ...passed 00:11:15.532 Test: blockdev nvme passthru rw ...passed 00:11:15.532 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.532 Test: blockdev nvme admin passthru ...passed 00:11:15.532 Test: blockdev copy ...passed 00:11:15.532 Suite: bdevio tests on: raid0 00:11:15.532 Test: blockdev write read block ...passed 00:11:15.532 Test: blockdev write zeroes read block ...passed 00:11:15.532 Test: blockdev write zeroes read no split ...passed 00:11:15.532 Test: blockdev write zeroes read split ...passed 00:11:15.532 Test: blockdev write zeroes read split partial ...passed 00:11:15.532 Test: blockdev reset ...passed 00:11:15.532 Test: blockdev write read 8 blocks ...passed 00:11:15.532 Test: blockdev write read size > 128k ...passed 00:11:15.532 Test: blockdev write read invalid size ...passed 00:11:15.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.532 Test: blockdev write read max offset ...passed 00:11:15.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.532 Test: blockdev writev readv 8 blocks ...passed 00:11:15.532 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.532 Test: blockdev writev readv block ...passed 00:11:15.532 Test: blockdev writev readv size > 128k ...passed 00:11:15.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.532 Test: blockdev comparev and writev ...passed 00:11:15.532 Test: blockdev nvme passthru rw ...passed 00:11:15.532 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.532 Test: blockdev nvme admin passthru ...passed 00:11:15.532 Test: blockdev copy ...passed 00:11:15.532 Suite: bdevio tests on: TestPT 00:11:15.532 Test: blockdev write read block ...passed 00:11:15.532 Test: blockdev write zeroes read block ...passed 00:11:15.532 Test: blockdev write zeroes read no split ...passed 00:11:15.532 Test: blockdev write zeroes read split ...passed 00:11:15.532 Test: blockdev write zeroes read split partial ...passed 00:11:15.532 Test: blockdev reset ...passed 00:11:15.532 Test: blockdev write read 8 blocks ...passed 00:11:15.532 Test: blockdev write read size > 128k ...passed 00:11:15.532 Test: blockdev write read invalid size ...passed 00:11:15.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.532 Test: blockdev write read max offset ...passed 00:11:15.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.532 Test: blockdev writev readv 8 blocks 
...passed 00:11:15.532 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.532 Test: blockdev writev readv block ...passed 00:11:15.532 Test: blockdev writev readv size > 128k ...passed 00:11:15.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.532 Test: blockdev comparev and writev ...passed 00:11:15.532 Test: blockdev nvme passthru rw ...passed 00:11:15.532 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.532 Test: blockdev nvme admin passthru ...passed 00:11:15.532 Test: blockdev copy ...passed 00:11:15.532 Suite: bdevio tests on: Malloc2p7 00:11:15.532 Test: blockdev write read block ...passed 00:11:15.532 Test: blockdev write zeroes read block ...passed 00:11:15.532 Test: blockdev write zeroes read no split ...passed 00:11:15.532 Test: blockdev write zeroes read split ...passed 00:11:15.532 Test: blockdev write zeroes read split partial ...passed 00:11:15.532 Test: blockdev reset ...passed 00:11:15.532 Test: blockdev write read 8 blocks ...passed 00:11:15.532 Test: blockdev write read size > 128k ...passed 00:11:15.532 Test: blockdev write read invalid size ...passed 00:11:15.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.532 Test: blockdev write read max offset ...passed 00:11:15.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.532 Test: blockdev writev readv 8 blocks ...passed 00:11:15.532 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.532 Test: blockdev writev readv block ...passed 00:11:15.532 Test: blockdev writev readv size > 128k ...passed 00:11:15.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.532 Test: blockdev comparev and writev ...passed 00:11:15.532 Test: blockdev nvme passthru rw ...passed 00:11:15.532 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.532 Test: blockdev nvme admin passthru ...passed 00:11:15.532 Test: blockdev copy ...passed 00:11:15.532 Suite: bdevio tests on: Malloc2p6 00:11:15.532 Test: blockdev write read block ...passed 00:11:15.532 Test: blockdev write zeroes read block ...passed 00:11:15.532 Test: blockdev write zeroes read no split ...passed 00:11:15.532 Test: blockdev write zeroes read split ...passed 00:11:15.532 Test: blockdev write zeroes read split partial ...passed 00:11:15.532 Test: blockdev reset ...passed 00:11:15.532 Test: blockdev write read 8 blocks ...passed 00:11:15.532 Test: blockdev write read size > 128k ...passed 00:11:15.532 Test: blockdev write read invalid size ...passed 00:11:15.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.532 Test: blockdev write read max offset ...passed 00:11:15.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.532 Test: blockdev writev readv 8 blocks ...passed 00:11:15.532 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.532 Test: blockdev writev readv block ...passed 00:11:15.532 Test: blockdev writev readv size > 128k ...passed 00:11:15.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.532 Test: blockdev comparev and writev ...passed 00:11:15.532 Test: blockdev nvme passthru rw ...passed 00:11:15.532 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.532 Test: blockdev nvme admin passthru ...passed 00:11:15.532 Test: blockdev copy ...passed 
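Every bdevio suite above runs the same fixed 23-test battery — write/read variants, zeroes, reset, writev/readv, compare-and-write, NVMe passthru, and copy — so the per-suite blocks differ only in the bdev under test (16 suites × 23 tests = the 368 in the run summary further down). A quick way to audit a saved console log is to tally results per suite; the helper below is a hypothetical sketch, not part of the SPDK tree, and assumes the usual one-entry-per-line console format:

    # tally bdevio pass/fail counts per suite from a saved console log
    awk '
      /Suite: bdevio tests on:/ { suite = $NF }        # e.g. raid0, TestPT, Malloc2p6
      /Test: .* \.\.\.passed$/  { pass[suite]++ }
      /Test: .* \.\.\.failed$/  { fail[suite]++ }
      END {
        for (s in pass)
          printf "%-10s %3d passed %3d failed\n", s, pass[s], fail[s] + 0
      }
    ' bdevio-console.log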
00:11:15.532 Suite: bdevio tests on: Malloc2p5 00:11:15.532 Test: blockdev write read block ...passed 00:11:15.532 Test: blockdev write zeroes read block ...passed 00:11:15.532 Test: blockdev write zeroes read no split ...passed 00:11:15.532 Test: blockdev write zeroes read split ...passed 00:11:15.532 Test: blockdev write zeroes read split partial ...passed 00:11:15.532 Test: blockdev reset ...passed 00:11:15.532 Test: blockdev write read 8 blocks ...passed 00:11:15.532 Test: blockdev write read size > 128k ...passed 00:11:15.532 Test: blockdev write read invalid size ...passed 00:11:15.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.532 Test: blockdev write read max offset ...passed 00:11:15.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.532 Test: blockdev writev readv 8 blocks ...passed 00:11:15.532 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.532 Test: blockdev writev readv block ...passed 00:11:15.532 Test: blockdev writev readv size > 128k ...passed 00:11:15.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.532 Test: blockdev comparev and writev ...passed 00:11:15.532 Test: blockdev nvme passthru rw ...passed 00:11:15.532 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.532 Test: blockdev nvme admin passthru ...passed 00:11:15.532 Test: blockdev copy ...passed 00:11:15.532 Suite: bdevio tests on: Malloc2p4 00:11:15.532 Test: blockdev write read block ...passed 00:11:15.532 Test: blockdev write zeroes read block ...passed 00:11:15.532 Test: blockdev write zeroes read no split ...passed 00:11:15.532 Test: blockdev write zeroes read split ...passed 00:11:15.532 Test: blockdev write zeroes read split partial ...passed 00:11:15.532 Test: blockdev reset ...passed 00:11:15.532 Test: blockdev write read 8 blocks ...passed 00:11:15.532 Test: blockdev write read size > 128k ...passed 00:11:15.532 Test: blockdev write read invalid size ...passed 00:11:15.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.532 Test: blockdev write read max offset ...passed 00:11:15.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.532 Test: blockdev writev readv 8 blocks ...passed 00:11:15.532 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.532 Test: blockdev writev readv block ...passed 00:11:15.532 Test: blockdev writev readv size > 128k ...passed 00:11:15.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.532 Test: blockdev comparev and writev ...passed 00:11:15.532 Test: blockdev nvme passthru rw ...passed 00:11:15.532 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.532 Test: blockdev nvme admin passthru ...passed 00:11:15.532 Test: blockdev copy ...passed 00:11:15.532 Suite: bdevio tests on: Malloc2p3 00:11:15.533 Test: blockdev write read block ...passed 00:11:15.533 Test: blockdev write zeroes read block ...passed 00:11:15.533 Test: blockdev write zeroes read no split ...passed 00:11:15.533 Test: blockdev write zeroes read split ...passed 00:11:15.533 Test: blockdev write zeroes read split partial ...passed 00:11:15.533 Test: blockdev reset ...passed 00:11:15.533 Test: blockdev write read 8 blocks ...passed 00:11:15.533 Test: blockdev write read size > 128k ...passed 00:11:15.533 Test: 
blockdev write read invalid size ...passed 00:11:15.533 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.533 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.533 Test: blockdev write read max offset ...passed 00:11:15.533 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.533 Test: blockdev writev readv 8 blocks ...passed 00:11:15.533 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.533 Test: blockdev writev readv block ...passed 00:11:15.533 Test: blockdev writev readv size > 128k ...passed 00:11:15.533 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.533 Test: blockdev comparev and writev ...passed 00:11:15.533 Test: blockdev nvme passthru rw ...passed 00:11:15.533 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.533 Test: blockdev nvme admin passthru ...passed 00:11:15.533 Test: blockdev copy ...passed 00:11:15.533 Suite: bdevio tests on: Malloc2p2 00:11:15.533 Test: blockdev write read block ...passed 00:11:15.533 Test: blockdev write zeroes read block ...passed 00:11:15.533 Test: blockdev write zeroes read no split ...passed 00:11:15.533 Test: blockdev write zeroes read split ...passed 00:11:15.792 Test: blockdev write zeroes read split partial ...passed 00:11:15.792 Test: blockdev reset ...passed 00:11:15.792 Test: blockdev write read 8 blocks ...passed 00:11:15.792 Test: blockdev write read size > 128k ...passed 00:11:15.792 Test: blockdev write read invalid size ...passed 00:11:15.792 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.792 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.792 Test: blockdev write read max offset ...passed 00:11:15.792 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.792 Test: blockdev writev readv 8 blocks ...passed 00:11:15.792 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.792 Test: blockdev writev readv block ...passed 00:11:15.792 Test: blockdev writev readv size > 128k ...passed 00:11:15.792 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.792 Test: blockdev comparev and writev ...passed 00:11:15.792 Test: blockdev nvme passthru rw ...passed 00:11:15.792 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.792 Test: blockdev nvme admin passthru ...passed 00:11:15.792 Test: blockdev copy ...passed 00:11:15.792 Suite: bdevio tests on: Malloc2p1 00:11:15.792 Test: blockdev write read block ...passed 00:11:15.792 Test: blockdev write zeroes read block ...passed 00:11:15.792 Test: blockdev write zeroes read no split ...passed 00:11:15.792 Test: blockdev write zeroes read split ...passed 00:11:15.792 Test: blockdev write zeroes read split partial ...passed 00:11:15.792 Test: blockdev reset ...passed 00:11:15.792 Test: blockdev write read 8 blocks ...passed 00:11:15.792 Test: blockdev write read size > 128k ...passed 00:11:15.793 Test: blockdev write read invalid size ...passed 00:11:15.793 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.793 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.793 Test: blockdev write read max offset ...passed 00:11:15.793 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.793 Test: blockdev writev readv 8 blocks ...passed 00:11:15.793 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.793 Test: blockdev writev readv block ...passed 
00:11:15.793 Test: blockdev writev readv size > 128k ...passed 00:11:15.793 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.793 Test: blockdev comparev and writev ...passed 00:11:15.793 Test: blockdev nvme passthru rw ...passed 00:11:15.793 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.793 Test: blockdev nvme admin passthru ...passed 00:11:15.793 Test: blockdev copy ...passed 00:11:15.793 Suite: bdevio tests on: Malloc2p0 00:11:15.793 Test: blockdev write read block ...passed 00:11:15.793 Test: blockdev write zeroes read block ...passed 00:11:15.793 Test: blockdev write zeroes read no split ...passed 00:11:15.793 Test: blockdev write zeroes read split ...passed 00:11:15.793 Test: blockdev write zeroes read split partial ...passed 00:11:15.793 Test: blockdev reset ...passed 00:11:15.793 Test: blockdev write read 8 blocks ...passed 00:11:15.793 Test: blockdev write read size > 128k ...passed 00:11:15.793 Test: blockdev write read invalid size ...passed 00:11:15.793 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.793 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.793 Test: blockdev write read max offset ...passed 00:11:15.793 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.793 Test: blockdev writev readv 8 blocks ...passed 00:11:15.793 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.793 Test: blockdev writev readv block ...passed 00:11:15.793 Test: blockdev writev readv size > 128k ...passed 00:11:15.793 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.793 Test: blockdev comparev and writev ...passed 00:11:15.793 Test: blockdev nvme passthru rw ...passed 00:11:15.793 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.793 Test: blockdev nvme admin passthru ...passed 00:11:15.793 Test: blockdev copy ...passed 00:11:15.793 Suite: bdevio tests on: Malloc1p1 00:11:15.793 Test: blockdev write read block ...passed 00:11:15.793 Test: blockdev write zeroes read block ...passed 00:11:15.793 Test: blockdev write zeroes read no split ...passed 00:11:15.793 Test: blockdev write zeroes read split ...passed 00:11:15.793 Test: blockdev write zeroes read split partial ...passed 00:11:15.793 Test: blockdev reset ...passed 00:11:15.793 Test: blockdev write read 8 blocks ...passed 00:11:15.793 Test: blockdev write read size > 128k ...passed 00:11:15.793 Test: blockdev write read invalid size ...passed 00:11:15.793 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.793 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.793 Test: blockdev write read max offset ...passed 00:11:15.793 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.793 Test: blockdev writev readv 8 blocks ...passed 00:11:15.793 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.793 Test: blockdev writev readv block ...passed 00:11:15.793 Test: blockdev writev readv size > 128k ...passed 00:11:15.793 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.793 Test: blockdev comparev and writev ...passed 00:11:15.793 Test: blockdev nvme passthru rw ...passed 00:11:15.793 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.793 Test: blockdev nvme admin passthru ...passed 00:11:15.793 Test: blockdev copy ...passed 00:11:15.793 Suite: bdevio tests on: Malloc1p0 00:11:15.793 Test: blockdev write read block ...passed 00:11:15.793 Test: blockdev 
write zeroes read block ...passed 00:11:15.793 Test: blockdev write zeroes read no split ...passed 00:11:15.793 Test: blockdev write zeroes read split ...passed 00:11:15.793 Test: blockdev write zeroes read split partial ...passed 00:11:15.793 Test: blockdev reset ...passed 00:11:15.793 Test: blockdev write read 8 blocks ...passed 00:11:15.793 Test: blockdev write read size > 128k ...passed 00:11:15.793 Test: blockdev write read invalid size ...passed 00:11:15.793 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.793 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.793 Test: blockdev write read max offset ...passed 00:11:15.793 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.793 Test: blockdev writev readv 8 blocks ...passed 00:11:15.793 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.793 Test: blockdev writev readv block ...passed 00:11:15.793 Test: blockdev writev readv size > 128k ...passed 00:11:15.793 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.793 Test: blockdev comparev and writev ...passed 00:11:15.793 Test: blockdev nvme passthru rw ...passed 00:11:15.793 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.793 Test: blockdev nvme admin passthru ...passed 00:11:15.793 Test: blockdev copy ...passed 00:11:15.793 Suite: bdevio tests on: Malloc0 00:11:15.793 Test: blockdev write read block ...passed 00:11:15.793 Test: blockdev write zeroes read block ...passed 00:11:15.793 Test: blockdev write zeroes read no split ...passed 00:11:15.793 Test: blockdev write zeroes read split ...passed 00:11:15.793 Test: blockdev write zeroes read split partial ...passed 00:11:15.793 Test: blockdev reset ...passed 00:11:15.793 Test: blockdev write read 8 blocks ...passed 00:11:15.793 Test: blockdev write read size > 128k ...passed 00:11:15.793 Test: blockdev write read invalid size ...passed 00:11:15.793 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.793 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.793 Test: blockdev write read max offset ...passed 00:11:15.793 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.793 Test: blockdev writev readv 8 blocks ...passed 00:11:15.793 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.793 Test: blockdev writev readv block ...passed 00:11:15.793 Test: blockdev writev readv size > 128k ...passed 00:11:15.793 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.793 Test: blockdev comparev and writev ...passed 00:11:15.793 Test: blockdev nvme passthru rw ...passed 00:11:15.793 Test: blockdev nvme passthru vendor specific ...passed 00:11:15.793 Test: blockdev nvme admin passthru ...passed 00:11:15.793 Test: blockdev copy ...passed 00:11:15.793 00:11:15.793 Run Summary: Type Total Ran Passed Failed Inactive 00:11:15.793 suites 16 16 n/a 0 0 00:11:15.793 tests 368 368 368 0 0 00:11:15.793 asserts 2224 2224 2224 0 n/a 00:11:15.793 00:11:15.793 Elapsed time = 0.671 seconds 00:11:15.793 0 00:11:15.793 21:07:38 -- bdev/blockdev.sh@293 -- # killprocess 121874 00:11:15.793 21:07:38 -- common/autotest_common.sh@926 -- # '[' -z 121874 ']' 00:11:15.793 21:07:38 -- common/autotest_common.sh@930 -- # kill -0 121874 00:11:15.793 21:07:38 -- common/autotest_common.sh@931 -- # uname 00:11:15.793 21:07:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:15.793 21:07:38 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121874 00:11:15.793 21:07:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:15.793 killing process with pid 121874 00:11:15.793 21:07:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:15.793 21:07:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121874' 00:11:15.793 21:07:38 -- common/autotest_common.sh@945 -- # kill 121874 00:11:15.793 21:07:38 -- common/autotest_common.sh@950 -- # wait 121874 00:11:16.051 21:07:38 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:11:16.051 00:11:16.051 real 0m1.838s 00:11:16.051 user 0m4.340s 00:11:16.051 sys 0m0.416s 00:11:16.051 21:07:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:16.051 ************************************ 00:11:16.051 END TEST bdev_bounds 00:11:16.051 ************************************ 00:11:16.051 21:07:38 -- common/autotest_common.sh@10 -- # set +x 00:11:16.310 21:07:38 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:16.310 21:07:38 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:16.310 21:07:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:16.310 21:07:38 -- common/autotest_common.sh@10 -- # set +x 00:11:16.310 ************************************ 00:11:16.310 START TEST bdev_nbd 00:11:16.310 ************************************ 00:11:16.310 21:07:38 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:16.310 21:07:38 -- bdev/blockdev.sh@298 -- # uname -s 00:11:16.310 21:07:38 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:11:16.310 21:07:38 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:16.310 21:07:38 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:16.310 21:07:38 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:11:16.310 21:07:38 -- bdev/blockdev.sh@302 -- # local bdev_all 00:11:16.310 21:07:38 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:11:16.310 21:07:38 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:11:16.310 21:07:38 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:11:16.310 21:07:38 -- bdev/blockdev.sh@309 -- # local nbd_all 00:11:16.310 21:07:38 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:11:16.310 21:07:38 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:11:16.310 21:07:38 -- bdev/blockdev.sh@312 -- # local nbd_list 00:11:16.310 21:07:38 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:11:16.310 21:07:38 -- bdev/blockdev.sh@313 -- # local bdev_list 00:11:16.310 21:07:38 -- bdev/blockdev.sh@316 -- # nbd_pid=121927 00:11:16.310 21:07:38 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:16.310 21:07:38 -- bdev/blockdev.sh@318 -- # waitforlisten 121927 /var/tmp/spdk-nbd.sock 00:11:16.310 21:07:38 -- common/autotest_common.sh@819 -- # '[' -z 121927 ']' 00:11:16.310 21:07:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:16.310 21:07:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:16.310 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:16.310 21:07:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:16.310 21:07:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:16.310 21:07:38 -- common/autotest_common.sh@10 -- # set +x 00:11:16.310 21:07:38 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:16.310 [2024-06-07 21:07:38.821967] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:16.310 [2024-06-07 21:07:38.822168] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.310 [2024-06-07 21:07:38.981227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.569 [2024-06-07 21:07:39.074537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.569 [2024-06-07 21:07:39.229096] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:16.569 [2024-06-07 21:07:39.229206] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:16.569 [2024-06-07 21:07:39.236902] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:16.569 [2024-06-07 21:07:39.237000] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:16.828 [2024-06-07 21:07:39.244941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:16.828 [2024-06-07 21:07:39.245035] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:16.828 [2024-06-07 21:07:39.245097] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:16.828 [2024-06-07 21:07:39.352542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:16.828 [2024-06-07 21:07:39.352746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.828 [2024-06-07 21:07:39.352800] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:16.828 [2024-06-07 21:07:39.352828] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.828 [2024-06-07 21:07:39.355620] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.828 [2024-06-07 21:07:39.355683] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:17.395 21:07:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:17.395 21:07:39 -- common/autotest_common.sh@852 -- # return 0 00:11:17.395 21:07:39 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:17.395 21:07:39 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:17.395 21:07:39 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:11:17.395 21:07:39 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:17.395 21:07:39 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 
Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:17.395 21:07:39 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:17.395 21:07:39 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:11:17.395 21:07:39 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:17.395 21:07:39 -- bdev/nbd_common.sh@24 -- # local i 00:11:17.395 21:07:39 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:17.395 21:07:39 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:17.395 21:07:39 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:17.395 21:07:39 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:11:17.654 21:07:40 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:17.654 21:07:40 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:17.654 21:07:40 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:17.654 21:07:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:11:17.654 21:07:40 -- common/autotest_common.sh@857 -- # local i 00:11:17.654 21:07:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:17.654 21:07:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:17.654 21:07:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:11:17.654 21:07:40 -- common/autotest_common.sh@861 -- # break 00:11:17.654 21:07:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:17.654 21:07:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:17.654 21:07:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.654 1+0 records in 00:11:17.654 1+0 records out 00:11:17.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030103 s, 13.6 MB/s 00:11:17.654 21:07:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.654 21:07:40 -- common/autotest_common.sh@874 -- # size=4096 00:11:17.654 21:07:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.654 21:07:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:17.654 21:07:40 -- common/autotest_common.sh@877 -- # return 0 00:11:17.654 21:07:40 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:17.654 21:07:40 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:17.654 21:07:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:11:17.912 21:07:40 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:17.912 21:07:40 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:17.912 21:07:40 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:17.912 21:07:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:11:17.912 21:07:40 -- common/autotest_common.sh@857 -- # local i 00:11:17.912 21:07:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:17.912 21:07:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:17.912 21:07:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:11:17.912 21:07:40 -- common/autotest_common.sh@861 -- # break 00:11:17.912 21:07:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:17.912 21:07:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:17.912 21:07:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.912 
1+0 records in 00:11:17.912 1+0 records out 00:11:17.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398807 s, 10.3 MB/s 00:11:17.912 21:07:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.912 21:07:40 -- common/autotest_common.sh@874 -- # size=4096 00:11:17.912 21:07:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.912 21:07:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:17.912 21:07:40 -- common/autotest_common.sh@877 -- # return 0 00:11:17.912 21:07:40 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:17.912 21:07:40 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:17.912 21:07:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:11:18.171 21:07:40 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:18.171 21:07:40 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:18.171 21:07:40 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:18.171 21:07:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:11:18.171 21:07:40 -- common/autotest_common.sh@857 -- # local i 00:11:18.171 21:07:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:18.171 21:07:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:18.171 21:07:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:11:18.171 21:07:40 -- common/autotest_common.sh@861 -- # break 00:11:18.171 21:07:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:18.171 21:07:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:18.171 21:07:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.171 1+0 records in 00:11:18.171 1+0 records out 00:11:18.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000718263 s, 5.7 MB/s 00:11:18.171 21:07:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.171 21:07:40 -- common/autotest_common.sh@874 -- # size=4096 00:11:18.171 21:07:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.171 21:07:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:18.171 21:07:40 -- common/autotest_common.sh@877 -- # return 0 00:11:18.171 21:07:40 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:18.171 21:07:40 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:18.171 21:07:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:11:18.430 21:07:40 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:18.430 21:07:40 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:18.430 21:07:40 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:18.430 21:07:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:11:18.430 21:07:40 -- common/autotest_common.sh@857 -- # local i 00:11:18.430 21:07:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:18.430 21:07:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:18.430 21:07:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:11:18.430 21:07:40 -- common/autotest_common.sh@861 -- # break 00:11:18.430 21:07:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:18.430 21:07:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:18.430 21:07:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.430 1+0 records in 00:11:18.430 1+0 records out 00:11:18.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343671 s, 11.9 MB/s 00:11:18.430 21:07:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.430 21:07:40 -- common/autotest_common.sh@874 -- # size=4096 00:11:18.430 21:07:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.430 21:07:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:18.430 21:07:40 -- common/autotest_common.sh@877 -- # return 0 00:11:18.430 21:07:40 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:18.430 21:07:40 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:18.430 21:07:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:11:18.688 21:07:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:18.688 21:07:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:18.688 21:07:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:18.688 21:07:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:11:18.688 21:07:41 -- common/autotest_common.sh@857 -- # local i 00:11:18.688 21:07:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:18.688 21:07:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:18.688 21:07:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:11:18.688 21:07:41 -- common/autotest_common.sh@861 -- # break 00:11:18.688 21:07:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:18.688 21:07:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:18.688 21:07:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.688 1+0 records in 00:11:18.688 1+0 records out 00:11:18.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404922 s, 10.1 MB/s 00:11:18.688 21:07:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.688 21:07:41 -- common/autotest_common.sh@874 -- # size=4096 00:11:18.688 21:07:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.688 21:07:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:18.688 21:07:41 -- common/autotest_common.sh@877 -- # return 0 00:11:18.688 21:07:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:18.688 21:07:41 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:18.688 21:07:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:11:18.947 21:07:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:18.947 21:07:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:18.947 21:07:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:18.947 21:07:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:11:18.947 21:07:41 -- common/autotest_common.sh@857 -- # local i 00:11:18.947 21:07:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:18.947 21:07:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:18.947 21:07:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:11:18.947 21:07:41 -- common/autotest_common.sh@861 -- # break 00:11:18.947 21:07:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:18.947 21:07:41 -- common/autotest_common.sh@872 -- # 
(( i <= 20 )) 00:11:18.947 21:07:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.947 1+0 records in 00:11:18.947 1+0 records out 00:11:18.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538823 s, 7.6 MB/s 00:11:18.947 21:07:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.947 21:07:41 -- common/autotest_common.sh@874 -- # size=4096 00:11:18.947 21:07:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.947 21:07:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:18.947 21:07:41 -- common/autotest_common.sh@877 -- # return 0 00:11:18.947 21:07:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:18.947 21:07:41 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:18.947 21:07:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:11:19.206 21:07:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:19.206 21:07:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:19.206 21:07:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:19.206 21:07:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:11:19.206 21:07:41 -- common/autotest_common.sh@857 -- # local i 00:11:19.206 21:07:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:19.206 21:07:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:19.206 21:07:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:11:19.206 21:07:41 -- common/autotest_common.sh@861 -- # break 00:11:19.206 21:07:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:19.206 21:07:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:19.206 21:07:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:19.206 1+0 records in 00:11:19.206 1+0 records out 00:11:19.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466612 s, 8.8 MB/s 00:11:19.206 21:07:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.206 21:07:41 -- common/autotest_common.sh@874 -- # size=4096 00:11:19.206 21:07:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.206 21:07:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:19.206 21:07:41 -- common/autotest_common.sh@877 -- # return 0 00:11:19.206 21:07:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:19.206 21:07:41 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:19.206 21:07:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:11:19.464 21:07:42 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:11:19.464 21:07:42 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:11:19.464 21:07:42 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:11:19.464 21:07:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:11:19.464 21:07:42 -- common/autotest_common.sh@857 -- # local i 00:11:19.464 21:07:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:19.464 21:07:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:19.464 21:07:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:11:19.464 21:07:42 -- common/autotest_common.sh@861 -- # break 00:11:19.464 21:07:42 -- 
common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:19.464 21:07:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:19.464 21:07:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:19.464 1+0 records in 00:11:19.464 1+0 records out 00:11:19.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360951 s, 11.3 MB/s 00:11:19.464 21:07:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.464 21:07:42 -- common/autotest_common.sh@874 -- # size=4096 00:11:19.464 21:07:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.464 21:07:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:19.464 21:07:42 -- common/autotest_common.sh@877 -- # return 0 00:11:19.464 21:07:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:19.464 21:07:42 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:19.464 21:07:42 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:11:19.723 21:07:42 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:11:19.723 21:07:42 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:11:19.723 21:07:42 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:11:19.723 21:07:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:11:19.723 21:07:42 -- common/autotest_common.sh@857 -- # local i 00:11:19.723 21:07:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:19.723 21:07:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:19.723 21:07:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:11:19.723 21:07:42 -- common/autotest_common.sh@861 -- # break 00:11:19.723 21:07:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:19.723 21:07:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:19.723 21:07:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:19.723 1+0 records in 00:11:19.723 1+0 records out 00:11:19.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100508 s, 4.1 MB/s 00:11:19.723 21:07:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.723 21:07:42 -- common/autotest_common.sh@874 -- # size=4096 00:11:19.723 21:07:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.723 21:07:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:19.723 21:07:42 -- common/autotest_common.sh@877 -- # return 0 00:11:19.723 21:07:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:19.723 21:07:42 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:19.723 21:07:42 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:11:19.982 21:07:42 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:11:19.982 21:07:42 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:11:19.982 21:07:42 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:11:19.982 21:07:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:11:19.982 21:07:42 -- common/autotest_common.sh@857 -- # local i 00:11:19.982 21:07:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:19.982 21:07:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:19.982 21:07:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 
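The readiness probe traced here is the interesting part of this loop: after each nbd_start_disk RPC, waitfornbd polls /proc/partitions for the new node (up to 20 tries) and then reads a single 4 KiB block with O_DIRECT to prove the device actually services I/O rather than merely existing. A minimal reconstruction from this xtrace output (a sketch, not the verbatim autotest_common.sh source):

    waitfornbd() {
        local nbd_name=$1 i tmp=/tmp/nbdtest
        # wait for the kernel to publish the new device node
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # read one block; iflag=direct forces the I/O down to the nbd device
        dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]    # a zero-byte copy means the device is not usable yet
    }

The iflag=direct is what makes the dd meaningful: it bypasses the page cache, so a successful 4096-byte copy (as in each "1+0 records in / 1+0 records out" pair above) demonstrates the NBD connection is really wired up to the bdev.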
00:11:19.982 21:07:42 -- common/autotest_common.sh@861 -- # break 00:11:19.982 21:07:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:19.982 21:07:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:19.982 21:07:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:19.982 1+0 records in 00:11:19.982 1+0 records out 00:11:19.982 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678815 s, 6.0 MB/s 00:11:19.982 21:07:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.982 21:07:42 -- common/autotest_common.sh@874 -- # size=4096 00:11:19.982 21:07:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.982 21:07:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:19.982 21:07:42 -- common/autotest_common.sh@877 -- # return 0 00:11:19.982 21:07:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:19.982 21:07:42 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:19.982 21:07:42 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:11:20.243 21:07:42 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:11:20.243 21:07:42 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:11:20.243 21:07:42 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:11:20.243 21:07:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:11:20.243 21:07:42 -- common/autotest_common.sh@857 -- # local i 00:11:20.243 21:07:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:20.243 21:07:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:20.243 21:07:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:11:20.243 21:07:42 -- common/autotest_common.sh@861 -- # break 00:11:20.243 21:07:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:20.243 21:07:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:20.243 21:07:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:20.539 1+0 records in 00:11:20.539 1+0 records out 00:11:20.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525056 s, 7.8 MB/s 00:11:20.539 21:07:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.539 21:07:42 -- common/autotest_common.sh@874 -- # size=4096 00:11:20.539 21:07:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.539 21:07:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:20.539 21:07:42 -- common/autotest_common.sh@877 -- # return 0 00:11:20.539 21:07:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:20.539 21:07:42 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:20.539 21:07:42 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:11:20.539 21:07:43 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:11:20.539 21:07:43 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:11:20.797 21:07:43 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:11:20.797 21:07:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:11:20.797 21:07:43 -- common/autotest_common.sh@857 -- # local i 00:11:20.797 21:07:43 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:20.797 21:07:43 -- common/autotest_common.sh@859 -- # (( i <= 20 
)) 00:11:20.797 21:07:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:11:20.797 21:07:43 -- common/autotest_common.sh@861 -- # break 00:11:20.797 21:07:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:20.797 21:07:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:20.797 21:07:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:20.797 1+0 records in 00:11:20.797 1+0 records out 00:11:20.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582388 s, 7.0 MB/s 00:11:20.797 21:07:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.797 21:07:43 -- common/autotest_common.sh@874 -- # size=4096 00:11:20.797 21:07:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.797 21:07:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:20.797 21:07:43 -- common/autotest_common.sh@877 -- # return 0 00:11:20.797 21:07:43 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:20.797 21:07:43 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:20.797 21:07:43 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:11:20.797 21:07:43 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:11:20.797 21:07:43 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:11:20.797 21:07:43 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:11:20.797 21:07:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:11:20.797 21:07:43 -- common/autotest_common.sh@857 -- # local i 00:11:20.797 21:07:43 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:20.797 21:07:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:20.797 21:07:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:11:20.797 21:07:43 -- common/autotest_common.sh@861 -- # break 00:11:20.797 21:07:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:20.797 21:07:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:20.797 21:07:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:20.797 1+0 records in 00:11:20.797 1+0 records out 00:11:20.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00066571 s, 6.2 MB/s 00:11:20.797 21:07:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:21.062 21:07:43 -- common/autotest_common.sh@874 -- # size=4096 00:11:21.062 21:07:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:21.062 21:07:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:21.062 21:07:43 -- common/autotest_common.sh@877 -- # return 0 00:11:21.062 21:07:43 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:21.062 21:07:43 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:21.062 21:07:43 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:11:21.062 21:07:43 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:11:21.062 21:07:43 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:11:21.062 21:07:43 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:11:21.062 21:07:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:11:21.062 21:07:43 -- common/autotest_common.sh@857 -- # local i 00:11:21.062 21:07:43 -- 
common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:21.062 21:07:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:21.062 21:07:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:11:21.062 21:07:43 -- common/autotest_common.sh@861 -- # break 00:11:21.062 21:07:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:21.062 21:07:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:21.062 21:07:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:21.320 1+0 records in 00:11:21.320 1+0 records out 00:11:21.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104039 s, 3.9 MB/s 00:11:21.320 21:07:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:21.320 21:07:43 -- common/autotest_common.sh@874 -- # size=4096 00:11:21.320 21:07:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:21.320 21:07:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:21.320 21:07:43 -- common/autotest_common.sh@877 -- # return 0 00:11:21.320 21:07:43 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:21.320 21:07:43 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:21.320 21:07:43 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:11:21.578 21:07:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:11:21.578 21:07:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:11:21.578 21:07:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:11:21.578 21:07:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:11:21.578 21:07:44 -- common/autotest_common.sh@857 -- # local i 00:11:21.578 21:07:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:21.578 21:07:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:21.578 21:07:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:11:21.578 21:07:44 -- common/autotest_common.sh@861 -- # break 00:11:21.578 21:07:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:21.578 21:07:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:21.578 21:07:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:21.578 1+0 records in 00:11:21.578 1+0 records out 00:11:21.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00086303 s, 4.7 MB/s 00:11:21.579 21:07:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:21.579 21:07:44 -- common/autotest_common.sh@874 -- # size=4096 00:11:21.579 21:07:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:21.579 21:07:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:21.579 21:07:44 -- common/autotest_common.sh@877 -- # return 0 00:11:21.579 21:07:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:21.579 21:07:44 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:21.579 21:07:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:11:21.837 21:07:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:11:21.837 21:07:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:11:21.837 21:07:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:11:21.837 21:07:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 
00:11:21.837 21:07:44 -- common/autotest_common.sh@857 -- # local i 00:11:21.837 21:07:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:21.837 21:07:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:21.837 21:07:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:11:21.837 21:07:44 -- common/autotest_common.sh@861 -- # break 00:11:21.837 21:07:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:21.837 21:07:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:21.837 21:07:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:21.837 1+0 records in 00:11:21.837 1+0 records out 00:11:21.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106536 s, 3.8 MB/s 00:11:21.837 21:07:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:21.837 21:07:44 -- common/autotest_common.sh@874 -- # size=4096 00:11:21.837 21:07:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:21.837 21:07:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:21.837 21:07:44 -- common/autotest_common.sh@877 -- # return 0 00:11:21.837 21:07:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:21.837 21:07:44 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:21.837 21:07:44 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:22.095 21:07:44 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:22.095 { 00:11:22.095 "nbd_device": "/dev/nbd0", 00:11:22.095 "bdev_name": "Malloc0" 00:11:22.095 }, 00:11:22.095 { 00:11:22.095 "nbd_device": "/dev/nbd1", 00:11:22.095 "bdev_name": "Malloc1p0" 00:11:22.095 }, 00:11:22.095 { 00:11:22.095 "nbd_device": "/dev/nbd2", 00:11:22.095 "bdev_name": "Malloc1p1" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd3", 00:11:22.096 "bdev_name": "Malloc2p0" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd4", 00:11:22.096 "bdev_name": "Malloc2p1" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd5", 00:11:22.096 "bdev_name": "Malloc2p2" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd6", 00:11:22.096 "bdev_name": "Malloc2p3" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd7", 00:11:22.096 "bdev_name": "Malloc2p4" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd8", 00:11:22.096 "bdev_name": "Malloc2p5" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd9", 00:11:22.096 "bdev_name": "Malloc2p6" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd10", 00:11:22.096 "bdev_name": "Malloc2p7" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd11", 00:11:22.096 "bdev_name": "TestPT" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd12", 00:11:22.096 "bdev_name": "raid0" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd13", 00:11:22.096 "bdev_name": "concat0" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd14", 00:11:22.096 "bdev_name": "raid1" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd15", 00:11:22.096 "bdev_name": "AIO0" 00:11:22.096 } 00:11:22.096 ]' 00:11:22.096 21:07:44 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:22.096 21:07:44 -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:22.096 { 00:11:22.096 
"nbd_device": "/dev/nbd0", 00:11:22.096 "bdev_name": "Malloc0" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd1", 00:11:22.096 "bdev_name": "Malloc1p0" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd2", 00:11:22.096 "bdev_name": "Malloc1p1" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd3", 00:11:22.096 "bdev_name": "Malloc2p0" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd4", 00:11:22.096 "bdev_name": "Malloc2p1" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd5", 00:11:22.096 "bdev_name": "Malloc2p2" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd6", 00:11:22.096 "bdev_name": "Malloc2p3" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd7", 00:11:22.096 "bdev_name": "Malloc2p4" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd8", 00:11:22.096 "bdev_name": "Malloc2p5" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd9", 00:11:22.096 "bdev_name": "Malloc2p6" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd10", 00:11:22.096 "bdev_name": "Malloc2p7" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd11", 00:11:22.096 "bdev_name": "TestPT" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd12", 00:11:22.096 "bdev_name": "raid0" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd13", 00:11:22.096 "bdev_name": "concat0" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd14", 00:11:22.096 "bdev_name": "raid1" 00:11:22.096 }, 00:11:22.096 { 00:11:22.096 "nbd_device": "/dev/nbd15", 00:11:22.096 "bdev_name": "AIO0" 00:11:22.096 } 00:11:22.096 ]' 00:11:22.096 21:07:44 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:22.096 21:07:44 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:11:22.096 21:07:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:22.096 21:07:44 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:11:22.096 21:07:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:22.096 21:07:44 -- bdev/nbd_common.sh@51 -- # local i 00:11:22.096 21:07:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.096 21:07:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:22.354 21:07:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:22.354 21:07:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:22.354 21:07:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:22.354 21:07:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.354 21:07:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.354 21:07:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:22.354 21:07:44 -- bdev/nbd_common.sh@41 -- # break 00:11:22.354 21:07:44 -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.354 21:07:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.354 21:07:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:22.612 21:07:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:22.612 21:07:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:22.612 21:07:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:11:22.612 21:07:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.612 21:07:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.612 21:07:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:22.612 21:07:45 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:22.612 21:07:45 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:22.612 21:07:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.612 21:07:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:22.612 21:07:45 -- bdev/nbd_common.sh@41 -- # break 00:11:22.612 21:07:45 -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.612 21:07:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.612 21:07:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:22.869 21:07:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:22.869 21:07:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:22.869 21:07:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:22.869 21:07:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.869 21:07:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.869 21:07:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:22.869 21:07:45 -- bdev/nbd_common.sh@41 -- # break 00:11:22.869 21:07:45 -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.869 21:07:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.869 21:07:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:23.436 21:07:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:23.436 21:07:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:23.436 21:07:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:23.436 21:07:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.436 21:07:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.436 21:07:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:23.436 21:07:45 -- bdev/nbd_common.sh@41 -- # break 00:11:23.436 21:07:45 -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.436 21:07:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.436 21:07:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:23.436 21:07:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:23.436 21:07:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:23.436 21:07:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:23.436 21:07:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.436 21:07:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.436 21:07:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:23.436 21:07:46 -- bdev/nbd_common.sh@41 -- # break 00:11:23.436 21:07:46 -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.436 21:07:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.436 21:07:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:24.003 21:07:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:24.003 21:07:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:24.003 21:07:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:24.003 21:07:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.003 21:07:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.003 21:07:46 -- bdev/nbd_common.sh@38 -- # 
grep -q -w nbd5 /proc/partitions 00:11:24.003 21:07:46 -- bdev/nbd_common.sh@41 -- # break 00:11:24.003 21:07:46 -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.003 21:07:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.003 21:07:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:24.003 21:07:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:24.003 21:07:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:24.003 21:07:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:24.003 21:07:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.003 21:07:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.003 21:07:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:24.003 21:07:46 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:24.261 21:07:46 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:24.261 21:07:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.261 21:07:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:24.261 21:07:46 -- bdev/nbd_common.sh@41 -- # break 00:11:24.261 21:07:46 -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.261 21:07:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.261 21:07:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:24.520 21:07:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:24.520 21:07:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:24.520 21:07:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:24.520 21:07:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.520 21:07:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.520 21:07:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:24.520 21:07:47 -- bdev/nbd_common.sh@41 -- # break 00:11:24.520 21:07:47 -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.520 21:07:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.520 21:07:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:24.778 21:07:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:24.778 21:07:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:24.778 21:07:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:24.778 21:07:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.778 21:07:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.778 21:07:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:24.778 21:07:47 -- bdev/nbd_common.sh@41 -- # break 00:11:24.778 21:07:47 -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.778 21:07:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.778 21:07:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:25.036 21:07:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:25.036 21:07:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:25.036 21:07:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:25.036 21:07:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.036 21:07:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.036 21:07:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:25.036 21:07:47 -- bdev/nbd_common.sh@41 -- # break 00:11:25.036 21:07:47 -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.036 21:07:47 -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.036 21:07:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:25.294 21:07:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:25.294 21:07:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:25.294 21:07:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:25.294 21:07:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.294 21:07:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.294 21:07:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:25.294 21:07:47 -- bdev/nbd_common.sh@41 -- # break 00:11:25.294 21:07:47 -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.294 21:07:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.294 21:07:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:25.553 21:07:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:25.553 21:07:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:25.553 21:07:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:25.553 21:07:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.553 21:07:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.553 21:07:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:25.553 21:07:48 -- bdev/nbd_common.sh@41 -- # break 00:11:25.553 21:07:48 -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.553 21:07:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.553 21:07:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:25.811 21:07:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:25.811 21:07:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:25.811 21:07:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:25.811 21:07:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.811 21:07:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.811 21:07:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:25.811 21:07:48 -- bdev/nbd_common.sh@41 -- # break 00:11:25.811 21:07:48 -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.811 21:07:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.811 21:07:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:26.069 21:07:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:26.069 21:07:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:26.069 21:07:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:26.069 21:07:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:26.069 21:07:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.069 21:07:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:26.069 21:07:48 -- bdev/nbd_common.sh@41 -- # break 00:11:26.069 21:07:48 -- bdev/nbd_common.sh@45 -- # return 0 00:11:26.069 21:07:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:26.069 21:07:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:26.328 21:07:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:26.328 21:07:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:26.328 21:07:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:26.328 
21:07:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:26.328 21:07:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.328 21:07:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:26.328 21:07:48 -- bdev/nbd_common.sh@41 -- # break 00:11:26.328 21:07:48 -- bdev/nbd_common.sh@45 -- # return 0 00:11:26.328 21:07:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:26.328 21:07:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:26.595 21:07:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:26.595 21:07:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:26.595 21:07:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:26.595 21:07:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:26.595 21:07:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.595 21:07:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:26.595 21:07:49 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:26.595 21:07:49 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:26.595 21:07:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.595 21:07:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:26.595 21:07:49 -- bdev/nbd_common.sh@41 -- # break 00:11:26.595 21:07:49 -- bdev/nbd_common.sh@45 -- # return 0 00:11:26.595 21:07:49 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:26.595 21:07:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:26.595 21:07:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:26.869 21:07:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:26.869 21:07:49 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:26.869 21:07:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@65 -- # true 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@65 -- # count=0 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@122 -- # count=0 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@127 -- # return 0 00:11:27.128 21:07:49 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 
/dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@12 -- # local i 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:27.128 21:07:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:27.386 /dev/nbd0 00:11:27.386 21:07:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:27.386 21:07:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:27.386 21:07:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:11:27.386 21:07:49 -- common/autotest_common.sh@857 -- # local i 00:11:27.386 21:07:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:27.386 21:07:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:27.386 21:07:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:11:27.386 21:07:49 -- common/autotest_common.sh@861 -- # break 00:11:27.386 21:07:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:27.386 21:07:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:27.386 21:07:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.386 1+0 records in 00:11:27.386 1+0 records out 00:11:27.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373828 s, 11.0 MB/s 00:11:27.387 21:07:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.387 21:07:49 -- common/autotest_common.sh@874 -- # size=4096 00:11:27.387 21:07:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.387 21:07:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:27.387 21:07:49 -- common/autotest_common.sh@877 -- # return 0 00:11:27.387 21:07:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.387 21:07:49 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:27.387 21:07:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:11:27.645 /dev/nbd1 00:11:27.645 21:07:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:27.645 21:07:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:27.645 21:07:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:11:27.645 21:07:50 -- common/autotest_common.sh@857 -- # local i 00:11:27.645 21:07:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:27.645 21:07:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:27.645 21:07:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:11:27.645 21:07:50 -- common/autotest_common.sh@861 -- # break 00:11:27.645 21:07:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:27.645 21:07:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:27.645 21:07:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.645 1+0 records in 00:11:27.645 1+0 records out 
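The matching start-side check (waitfornbd in common/autotest_common.sh, traced above) first waits for the name to appear in /proc/partitions, then proves the device is actually usable with one 4 KiB O_DIRECT read and a stat on the copied bytes. A sketch of that probe under the same assumptions; probe_nbd and the scratch path are illustrative:

    probe_nbd() {
        local dev=$1 scratch=$2
        # one 4 KiB direct read confirms the export is attached and readable
        dd if="$dev" of="$scratch" bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s "$scratch")    # non-zero size means the read landed
        rm -f "$scratch"
        [ "$size" != 0 ]
    }
    probe_nbd /dev/nbd1 /tmp/nbdtest    # hypothetical scratch path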
00:11:27.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412326 s, 9.9 MB/s 00:11:27.645 21:07:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.645 21:07:50 -- common/autotest_common.sh@874 -- # size=4096 00:11:27.645 21:07:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.645 21:07:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:27.645 21:07:50 -- common/autotest_common.sh@877 -- # return 0 00:11:27.645 21:07:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.645 21:07:50 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:27.645 21:07:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:11:27.904 /dev/nbd10 00:11:27.904 21:07:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:27.904 21:07:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:27.904 21:07:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:11:27.904 21:07:50 -- common/autotest_common.sh@857 -- # local i 00:11:27.904 21:07:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:27.904 21:07:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:27.904 21:07:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:11:27.904 21:07:50 -- common/autotest_common.sh@861 -- # break 00:11:27.904 21:07:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:27.904 21:07:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:27.904 21:07:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.904 1+0 records in 00:11:27.904 1+0 records out 00:11:27.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000614239 s, 6.7 MB/s 00:11:27.904 21:07:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.904 21:07:50 -- common/autotest_common.sh@874 -- # size=4096 00:11:27.904 21:07:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.904 21:07:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:27.904 21:07:50 -- common/autotest_common.sh@877 -- # return 0 00:11:27.904 21:07:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.904 21:07:50 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:27.904 21:07:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:11:28.171 /dev/nbd11 00:11:28.171 21:07:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:28.171 21:07:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:28.171 21:07:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:11:28.171 21:07:50 -- common/autotest_common.sh@857 -- # local i 00:11:28.171 21:07:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:28.171 21:07:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:28.171 21:07:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:11:28.171 21:07:50 -- common/autotest_common.sh@861 -- # break 00:11:28.171 21:07:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:28.171 21:07:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:28.171 21:07:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.171 1+0 records in 
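Note the pairing order here: nbd_start_disks walks two parallel arrays, so Malloc1p1 lands on /dev/nbd10 simply because that is the next entry in the caller's device list, not because of any kernel numbering. Condensed, the driver loop traced at nbd_common.sh@14-15 amounts to the following (lists abbreviated; the run passes all 16 pairs):

    bdev_list=(Malloc0 Malloc1p0 Malloc1p1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10)
    for (( i = 0; i < ${#bdev_list[@]}; i++ )); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
            nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    done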
00:11:28.171 1+0 records out 00:11:28.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402212 s, 10.2 MB/s 00:11:28.171 21:07:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.171 21:07:50 -- common/autotest_common.sh@874 -- # size=4096 00:11:28.171 21:07:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.171 21:07:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:28.171 21:07:50 -- common/autotest_common.sh@877 -- # return 0 00:11:28.171 21:07:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.171 21:07:50 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:28.171 21:07:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:11:28.433 /dev/nbd12 00:11:28.433 21:07:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:28.433 21:07:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:28.433 21:07:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:11:28.433 21:07:50 -- common/autotest_common.sh@857 -- # local i 00:11:28.433 21:07:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:28.433 21:07:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:28.433 21:07:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:11:28.433 21:07:50 -- common/autotest_common.sh@861 -- # break 00:11:28.433 21:07:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:28.433 21:07:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:28.433 21:07:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.433 1+0 records in 00:11:28.433 1+0 records out 00:11:28.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364086 s, 11.3 MB/s 00:11:28.433 21:07:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.433 21:07:51 -- common/autotest_common.sh@874 -- # size=4096 00:11:28.433 21:07:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.433 21:07:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:28.433 21:07:51 -- common/autotest_common.sh@877 -- # return 0 00:11:28.433 21:07:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.433 21:07:51 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:28.433 21:07:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:11:28.692 /dev/nbd13 00:11:28.692 21:07:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:28.692 21:07:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:28.692 21:07:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:11:28.692 21:07:51 -- common/autotest_common.sh@857 -- # local i 00:11:28.692 21:07:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:28.692 21:07:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:28.692 21:07:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:11:28.692 21:07:51 -- common/autotest_common.sh@861 -- # break 00:11:28.692 21:07:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:28.692 21:07:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:28.692 21:07:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
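Before this start pass (count=0 above) and again after it (count=16 below), the harness confirms how many exports are live via nbd_get_count: list the disks over the RPC socket, project out the device paths with jq, and count them with grep. A one-liner form under the same socket assumption; the `|| true` guards grep's exit-1-on-zero-matches, which the trace's `true` step reflects:

    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    echo "attached nbd devices: $count"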
00:11:28.692 1+0 records in 00:11:28.692 1+0 records out 00:11:28.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478247 s, 8.6 MB/s 00:11:28.692 21:07:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.692 21:07:51 -- common/autotest_common.sh@874 -- # size=4096 00:11:28.692 21:07:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.692 21:07:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:28.692 21:07:51 -- common/autotest_common.sh@877 -- # return 0 00:11:28.692 21:07:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.692 21:07:51 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:28.692 21:07:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:11:28.951 /dev/nbd14 00:11:28.951 21:07:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:28.951 21:07:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:28.951 21:07:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:11:28.951 21:07:51 -- common/autotest_common.sh@857 -- # local i 00:11:28.951 21:07:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:28.951 21:07:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:28.951 21:07:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:11:28.951 21:07:51 -- common/autotest_common.sh@861 -- # break 00:11:28.951 21:07:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:28.951 21:07:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:28.951 21:07:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.951 1+0 records in 00:11:28.951 1+0 records out 00:11:28.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341305 s, 12.0 MB/s 00:11:28.951 21:07:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.951 21:07:51 -- common/autotest_common.sh@874 -- # size=4096 00:11:28.951 21:07:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.951 21:07:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:28.951 21:07:51 -- common/autotest_common.sh@877 -- # return 0 00:11:28.951 21:07:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.951 21:07:51 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:28.951 21:07:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:11:29.210 /dev/nbd15 00:11:29.210 21:07:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:11:29.210 21:07:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:11:29.210 21:07:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:11:29.210 21:07:51 -- common/autotest_common.sh@857 -- # local i 00:11:29.210 21:07:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:29.210 21:07:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:29.210 21:07:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:11:29.210 21:07:51 -- common/autotest_common.sh@861 -- # break 00:11:29.210 21:07:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:29.210 21:07:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:29.210 21:07:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:11:29.210 1+0 records in 00:11:29.210 1+0 records out 00:11:29.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459419 s, 8.9 MB/s 00:11:29.210 21:07:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.210 21:07:51 -- common/autotest_common.sh@874 -- # size=4096 00:11:29.210 21:07:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.210 21:07:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:29.210 21:07:51 -- common/autotest_common.sh@877 -- # return 0 00:11:29.210 21:07:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:29.210 21:07:51 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:29.210 21:07:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:11:29.469 /dev/nbd2 00:11:29.469 21:07:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:11:29.469 21:07:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:11:29.469 21:07:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:11:29.469 21:07:52 -- common/autotest_common.sh@857 -- # local i 00:11:29.469 21:07:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:29.469 21:07:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:29.469 21:07:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:11:29.469 21:07:52 -- common/autotest_common.sh@861 -- # break 00:11:29.469 21:07:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:29.469 21:07:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:29.469 21:07:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:29.469 1+0 records in 00:11:29.469 1+0 records out 00:11:29.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689339 s, 5.9 MB/s 00:11:29.469 21:07:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.469 21:07:52 -- common/autotest_common.sh@874 -- # size=4096 00:11:29.469 21:07:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.469 21:07:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:29.469 21:07:52 -- common/autotest_common.sh@877 -- # return 0 00:11:29.469 21:07:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:29.469 21:07:52 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:29.469 21:07:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:11:29.728 /dev/nbd3 00:11:29.728 21:07:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:11:29.728 21:07:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:11:29.728 21:07:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:11:29.728 21:07:52 -- common/autotest_common.sh@857 -- # local i 00:11:29.728 21:07:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:29.728 21:07:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:29.728 21:07:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:11:29.728 21:07:52 -- common/autotest_common.sh@861 -- # break 00:11:29.728 21:07:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:29.728 21:07:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:29.728 21:07:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:29.728 1+0 records in 00:11:29.728 1+0 records out 00:11:29.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585444 s, 7.0 MB/s 00:11:29.728 21:07:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.728 21:07:52 -- common/autotest_common.sh@874 -- # size=4096 00:11:29.728 21:07:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.728 21:07:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:29.728 21:07:52 -- common/autotest_common.sh@877 -- # return 0 00:11:29.728 21:07:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:29.728 21:07:52 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:29.728 21:07:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:11:30.294 /dev/nbd4 00:11:30.294 21:07:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:11:30.294 21:07:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:11:30.294 21:07:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:11:30.294 21:07:52 -- common/autotest_common.sh@857 -- # local i 00:11:30.294 21:07:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:30.294 21:07:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:30.294 21:07:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:11:30.294 21:07:52 -- common/autotest_common.sh@861 -- # break 00:11:30.294 21:07:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:30.294 21:07:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:30.294 21:07:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:30.294 1+0 records in 00:11:30.294 1+0 records out 00:11:30.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582296 s, 7.0 MB/s 00:11:30.294 21:07:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.294 21:07:52 -- common/autotest_common.sh@874 -- # size=4096 00:11:30.294 21:07:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.294 21:07:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:30.294 21:07:52 -- common/autotest_common.sh@877 -- # return 0 00:11:30.294 21:07:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:30.294 21:07:52 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:30.294 21:07:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:11:30.557 /dev/nbd5 00:11:30.557 21:07:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:11:30.557 21:07:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:11:30.557 21:07:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:11:30.557 21:07:53 -- common/autotest_common.sh@857 -- # local i 00:11:30.557 21:07:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:30.557 21:07:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:30.557 21:07:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:11:30.557 21:07:53 -- common/autotest_common.sh@861 -- # break 00:11:30.557 21:07:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:30.557 21:07:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:30.557 21:07:53 -- common/autotest_common.sh@873 -- # dd 
if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:30.557 1+0 records in 00:11:30.557 1+0 records out 00:11:30.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000893705 s, 4.6 MB/s 00:11:30.557 21:07:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.557 21:07:53 -- common/autotest_common.sh@874 -- # size=4096 00:11:30.557 21:07:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.557 21:07:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:30.557 21:07:53 -- common/autotest_common.sh@877 -- # return 0 00:11:30.557 21:07:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:30.557 21:07:53 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:30.557 21:07:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:11:30.815 /dev/nbd6 00:11:30.815 21:07:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:11:30.815 21:07:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:11:30.815 21:07:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:11:30.815 21:07:53 -- common/autotest_common.sh@857 -- # local i 00:11:30.815 21:07:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:30.815 21:07:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:30.815 21:07:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:11:30.815 21:07:53 -- common/autotest_common.sh@861 -- # break 00:11:30.815 21:07:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:30.815 21:07:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:30.815 21:07:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:30.815 1+0 records in 00:11:30.815 1+0 records out 00:11:30.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573741 s, 7.1 MB/s 00:11:30.815 21:07:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.815 21:07:53 -- common/autotest_common.sh@874 -- # size=4096 00:11:30.815 21:07:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.815 21:07:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:30.815 21:07:53 -- common/autotest_common.sh@877 -- # return 0 00:11:30.815 21:07:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:30.815 21:07:53 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:30.815 21:07:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:11:31.073 /dev/nbd7 00:11:31.073 21:07:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:11:31.073 21:07:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:11:31.073 21:07:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:11:31.073 21:07:53 -- common/autotest_common.sh@857 -- # local i 00:11:31.073 21:07:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:31.073 21:07:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:31.073 21:07:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:11:31.073 21:07:53 -- common/autotest_common.sh@861 -- # break 00:11:31.073 21:07:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:31.073 21:07:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:31.073 21:07:53 -- common/autotest_common.sh@873 -- # 
dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.073 1+0 records in 00:11:31.073 1+0 records out 00:11:31.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000776991 s, 5.3 MB/s 00:11:31.073 21:07:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.073 21:07:53 -- common/autotest_common.sh@874 -- # size=4096 00:11:31.073 21:07:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.073 21:07:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:31.073 21:07:53 -- common/autotest_common.sh@877 -- # return 0 00:11:31.073 21:07:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:31.073 21:07:53 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:31.073 21:07:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:11:31.332 /dev/nbd8 00:11:31.332 21:07:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:11:31.332 21:07:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:11:31.332 21:07:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:11:31.332 21:07:53 -- common/autotest_common.sh@857 -- # local i 00:11:31.332 21:07:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:31.332 21:07:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:31.332 21:07:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:11:31.332 21:07:53 -- common/autotest_common.sh@861 -- # break 00:11:31.332 21:07:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:31.332 21:07:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:31.332 21:07:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.332 1+0 records in 00:11:31.332 1+0 records out 00:11:31.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557686 s, 7.3 MB/s 00:11:31.332 21:07:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.332 21:07:53 -- common/autotest_common.sh@874 -- # size=4096 00:11:31.332 21:07:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.332 21:07:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:31.332 21:07:53 -- common/autotest_common.sh@877 -- # return 0 00:11:31.332 21:07:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:31.332 21:07:53 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:31.332 21:07:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:11:31.591 /dev/nbd9 00:11:31.591 21:07:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:11:31.591 21:07:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:11:31.591 21:07:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:11:31.591 21:07:54 -- common/autotest_common.sh@857 -- # local i 00:11:31.591 21:07:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:31.591 21:07:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:31.591 21:07:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:11:31.591 21:07:54 -- common/autotest_common.sh@861 -- # break 00:11:31.591 21:07:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:31.591 21:07:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:31.592 21:07:54 -- common/autotest_common.sh@873 -- # 
dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.592 1+0 records in 00:11:31.592 1+0 records out 00:11:31.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114 s, 3.6 MB/s 00:11:31.592 21:07:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.592 21:07:54 -- common/autotest_common.sh@874 -- # size=4096 00:11:31.592 21:07:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.592 21:07:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:31.592 21:07:54 -- common/autotest_common.sh@877 -- # return 0 00:11:31.592 21:07:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:31.592 21:07:54 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:31.592 21:07:54 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:31.592 21:07:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:31.592 21:07:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:31.850 21:07:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:31.850 { 00:11:31.850 "nbd_device": "/dev/nbd0", 00:11:31.850 "bdev_name": "Malloc0" 00:11:31.850 }, 00:11:31.850 { 00:11:31.850 "nbd_device": "/dev/nbd1", 00:11:31.850 "bdev_name": "Malloc1p0" 00:11:31.850 }, 00:11:31.850 { 00:11:31.850 "nbd_device": "/dev/nbd10", 00:11:31.850 "bdev_name": "Malloc1p1" 00:11:31.850 }, 00:11:31.850 { 00:11:31.850 "nbd_device": "/dev/nbd11", 00:11:31.850 "bdev_name": "Malloc2p0" 00:11:31.850 }, 00:11:31.850 { 00:11:31.850 "nbd_device": "/dev/nbd12", 00:11:31.850 "bdev_name": "Malloc2p1" 00:11:31.850 }, 00:11:31.850 { 00:11:31.850 "nbd_device": "/dev/nbd13", 00:11:31.850 "bdev_name": "Malloc2p2" 00:11:31.850 }, 00:11:31.850 { 00:11:31.850 "nbd_device": "/dev/nbd14", 00:11:31.850 "bdev_name": "Malloc2p3" 00:11:31.850 }, 00:11:31.850 { 00:11:31.850 "nbd_device": "/dev/nbd15", 00:11:31.850 "bdev_name": "Malloc2p4" 00:11:31.850 }, 00:11:31.850 { 00:11:31.850 "nbd_device": "/dev/nbd2", 00:11:31.851 "bdev_name": "Malloc2p5" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd3", 00:11:31.851 "bdev_name": "Malloc2p6" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd4", 00:11:31.851 "bdev_name": "Malloc2p7" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd5", 00:11:31.851 "bdev_name": "TestPT" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd6", 00:11:31.851 "bdev_name": "raid0" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd7", 00:11:31.851 "bdev_name": "concat0" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd8", 00:11:31.851 "bdev_name": "raid1" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd9", 00:11:31.851 "bdev_name": "AIO0" 00:11:31.851 } 00:11:31.851 ]' 00:11:31.851 21:07:54 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd0", 00:11:31.851 "bdev_name": "Malloc0" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd1", 00:11:31.851 "bdev_name": "Malloc1p0" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd10", 00:11:31.851 "bdev_name": "Malloc1p1" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd11", 00:11:31.851 "bdev_name": "Malloc2p0" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd12", 00:11:31.851 "bdev_name": "Malloc2p1" 00:11:31.851 }, 00:11:31.851 { 
00:11:31.851 "nbd_device": "/dev/nbd13", 00:11:31.851 "bdev_name": "Malloc2p2" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd14", 00:11:31.851 "bdev_name": "Malloc2p3" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd15", 00:11:31.851 "bdev_name": "Malloc2p4" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd2", 00:11:31.851 "bdev_name": "Malloc2p5" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd3", 00:11:31.851 "bdev_name": "Malloc2p6" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd4", 00:11:31.851 "bdev_name": "Malloc2p7" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd5", 00:11:31.851 "bdev_name": "TestPT" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd6", 00:11:31.851 "bdev_name": "raid0" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd7", 00:11:31.851 "bdev_name": "concat0" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd8", 00:11:31.851 "bdev_name": "raid1" 00:11:31.851 }, 00:11:31.851 { 00:11:31.851 "nbd_device": "/dev/nbd9", 00:11:31.851 "bdev_name": "AIO0" 00:11:31.851 } 00:11:31.851 ]' 00:11:31.851 21:07:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:31.851 21:07:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:31.851 /dev/nbd1 00:11:31.851 /dev/nbd10 00:11:31.851 /dev/nbd11 00:11:31.851 /dev/nbd12 00:11:31.851 /dev/nbd13 00:11:31.851 /dev/nbd14 00:11:31.851 /dev/nbd15 00:11:31.851 /dev/nbd2 00:11:31.851 /dev/nbd3 00:11:31.851 /dev/nbd4 00:11:31.851 /dev/nbd5 00:11:31.851 /dev/nbd6 00:11:31.851 /dev/nbd7 00:11:31.851 /dev/nbd8 00:11:31.851 /dev/nbd9' 00:11:31.851 21:07:54 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:31.851 /dev/nbd1 00:11:31.851 /dev/nbd10 00:11:31.851 /dev/nbd11 00:11:31.851 /dev/nbd12 00:11:31.851 /dev/nbd13 00:11:31.851 /dev/nbd14 00:11:31.851 /dev/nbd15 00:11:31.851 /dev/nbd2 00:11:31.851 /dev/nbd3 00:11:31.851 /dev/nbd4 00:11:31.851 /dev/nbd5 00:11:31.851 /dev/nbd6 00:11:31.851 /dev/nbd7 00:11:31.851 /dev/nbd8 00:11:31.851 /dev/nbd9' 00:11:31.851 21:07:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:31.851 21:07:54 -- bdev/nbd_common.sh@65 -- # count=16 00:11:31.851 21:07:54 -- bdev/nbd_common.sh@66 -- # echo 16 00:11:31.851 21:07:54 -- bdev/nbd_common.sh@95 -- # count=16 00:11:31.851 21:07:54 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:11:31.851 21:07:54 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:11:31.851 21:07:54 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:11:31.851 21:07:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:31.851 21:07:54 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:31.851 21:07:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:31.851 21:07:54 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:31.851 21:07:54 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:31.851 256+0 records in 00:11:31.851 256+0 records out 00:11:31.851 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105906 s, 99.0 MB/s 00:11:31.851 21:07:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:31.851 21:07:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:32.110 256+0 records in 00:11:32.110 256+0 records out 00:11:32.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158617 s, 6.6 MB/s 00:11:32.110 21:07:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:32.110 21:07:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:32.369 256+0 records in 00:11:32.369 256+0 records out 00:11:32.369 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.193743 s, 5.4 MB/s 00:11:32.369 21:07:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:32.369 21:07:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:32.628 256+0 records in 00:11:32.628 256+0 records out 00:11:32.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180321 s, 5.8 MB/s 00:11:32.628 21:07:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:32.628 21:07:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:32.628 256+0 records in 00:11:32.628 256+0 records out 00:11:32.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168354 s, 6.2 MB/s 00:11:32.628 21:07:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:32.628 21:07:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:32.887 256+0 records in 00:11:32.887 256+0 records out 00:11:32.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162915 s, 6.4 MB/s 00:11:32.887 21:07:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:32.887 21:07:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:32.887 256+0 records in 00:11:32.887 256+0 records out 00:11:32.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165187 s, 6.3 MB/s 00:11:32.887 21:07:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:32.887 21:07:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:33.146 256+0 records in 00:11:33.146 256+0 records out 00:11:33.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164655 s, 6.4 MB/s 00:11:33.146 21:07:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:33.146 21:07:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:11:33.418 256+0 records in 00:11:33.418 256+0 records out 00:11:33.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161886 s, 6.5 MB/s 00:11:33.418 21:07:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:33.418 21:07:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:11:33.418 256+0 records in 00:11:33.418 256+0 records out 00:11:33.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1649 s, 6.4 MB/s 00:11:33.418 21:07:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:33.418 21:07:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:11:33.689 256+0 records in 00:11:33.689 256+0 records out 00:11:33.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159885 s, 6.6 MB/s 00:11:33.689 21:07:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
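The data pass above seeds nbdrandtest with 1 MiB of /dev/urandom and streams it to each export through O_DIRECT writes; the verify pass that follows reads every device back with cmp -b -n 1M against the same file. The round trip for one device, with a hypothetical scratch path standing in for the repo-relative one used here:

    tmp=/tmp/nbdrandtest    # illustrative; the run keeps this under test/bdev/
    dd if=/dev/urandom of="$tmp" bs=4096 count=256    # 1 MiB of random data
    dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$tmp" /dev/nbd0 && echo "nbd0 verified"
    rm -f "$tmp"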
00:11:33.689 21:07:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:11:33.948 256+0 records in 00:11:33.948 256+0 records out 00:11:33.948 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155811 s, 6.7 MB/s 00:11:33.948 21:07:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:33.948 21:07:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:11:33.948 256+0 records in 00:11:33.948 256+0 records out 00:11:33.948 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161609 s, 6.5 MB/s 00:11:33.949 21:07:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:33.949 21:07:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:11:34.207 256+0 records in 00:11:34.207 256+0 records out 00:11:34.207 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16515 s, 6.3 MB/s 00:11:34.207 21:07:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:34.207 21:07:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:11:34.207 256+0 records in 00:11:34.207 256+0 records out 00:11:34.207 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155247 s, 6.8 MB/s 00:11:34.207 21:07:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:34.207 21:07:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:11:34.466 256+0 records in 00:11:34.466 256+0 records out 00:11:34.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.20207 s, 5.2 MB/s 00:11:34.466 21:07:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:34.466 21:07:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:11:34.725 256+0 records in 00:11:34.725 256+0 records out 00:11:34.725 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.239268 s, 4.4 MB/s 00:11:34.725 21:07:57 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:11:34.725 21:07:57 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:11:34.725 21:07:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:34.725 21:07:57 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:34.725 21:07:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:34.725 21:07:57 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:34.725 21:07:57 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:34.725 21:07:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.725 21:07:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:34.725 21:07:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.725 21:07:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:34.725 21:07:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.725 21:07:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:34.725 21:07:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.725 21:07:57 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:34.725 21:07:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.725 21:07:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:34.725 21:07:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.725 21:07:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:34.725 21:07:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.725 21:07:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:34.726 21:07:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.726 21:07:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:11:34.726 21:07:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.726 21:07:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:11:34.726 21:07:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.726 21:07:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:11:34.726 21:07:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.726 21:07:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:11:34.726 21:07:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.726 21:07:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:11:34.984 21:07:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.985 21:07:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:11:34.985 21:07:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.985 21:07:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:11:34.985 21:07:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.985 21:07:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:11:34.985 21:07:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.985 21:07:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:11:34.985 21:07:57 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:34.985 21:07:57 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:34.985 21:07:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:34.985 21:07:57 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:11:34.985 21:07:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:34.985 21:07:57 -- bdev/nbd_common.sh@51 -- # local i 00:11:34.985 21:07:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.985 21:07:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:35.244 21:07:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:35.244 21:07:57 -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd0 00:11:35.244 21:07:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:35.244 21:07:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.244 21:07:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.244 21:07:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:35.244 21:07:57 -- bdev/nbd_common.sh@41 -- # break 00:11:35.244 21:07:57 -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.244 21:07:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.244 21:07:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:35.502 21:07:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:35.502 21:07:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:35.502 21:07:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:35.502 21:07:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.502 21:07:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.502 21:07:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:35.502 21:07:57 -- bdev/nbd_common.sh@41 -- # break 00:11:35.502 21:07:57 -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.502 21:07:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.502 21:07:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:35.761 21:07:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:35.761 21:07:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:35.761 21:07:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:35.761 21:07:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.761 21:07:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.761 21:07:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:35.761 21:07:58 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:35.761 21:07:58 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:35.761 21:07:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.761 21:07:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:35.761 21:07:58 -- bdev/nbd_common.sh@41 -- # break 00:11:35.761 21:07:58 -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.761 21:07:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.761 21:07:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:36.020 21:07:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:36.020 21:07:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:36.020 21:07:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:36.020 21:07:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.020 21:07:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.020 21:07:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:36.020 21:07:58 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:36.020 21:07:58 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:36.020 21:07:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.020 21:07:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:36.020 21:07:58 -- bdev/nbd_common.sh@41 -- # break 00:11:36.020 21:07:58 -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.020 21:07:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.020 21:07:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:36.279 
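Each nbd_stop_disk RPC above is followed by waitfornbd_exit, which polls /proc/partitions until the kernel has actually released the device; the sleep 0.1 and (( i++ )) lines for nbd10 and nbd11 are that retry path firing. Restated as a standalone function, assuming the same 20-attempt budget seen in the trace:

waitfornbd_exit() {
    local nbd_name=$1
    local i
    for (( i = 1; i <= 20; i++ )); do
        # The device is gone once its name leaves the kernel partition table.
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1
    done
    return 1   # still attached after roughly 2 s of polling
}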
21:07:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:36.279 21:07:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:36.279 21:07:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:36.279 21:07:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.279 21:07:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.279 21:07:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:36.279 21:07:58 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:36.538 21:07:58 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:36.538 21:07:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.538 21:07:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:36.538 21:07:58 -- bdev/nbd_common.sh@41 -- # break 00:11:36.538 21:07:58 -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.538 21:07:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.538 21:07:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:36.796 21:07:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:36.796 21:07:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:36.796 21:07:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:36.796 21:07:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.796 21:07:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.796 21:07:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:36.796 21:07:59 -- bdev/nbd_common.sh@41 -- # break 00:11:36.796 21:07:59 -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.796 21:07:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.796 21:07:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:37.055 21:07:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:37.055 21:07:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:37.055 21:07:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:37.055 21:07:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:37.055 21:07:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:37.055 21:07:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:37.055 21:07:59 -- bdev/nbd_common.sh@41 -- # break 00:11:37.055 21:07:59 -- bdev/nbd_common.sh@45 -- # return 0 00:11:37.055 21:07:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:37.055 21:07:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:37.317 21:07:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:37.317 21:07:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:37.317 21:07:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:37.317 21:07:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:37.317 21:07:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:37.317 21:07:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:37.317 21:07:59 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:37.317 21:07:59 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:37.317 21:07:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:37.317 21:07:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:37.317 21:07:59 -- bdev/nbd_common.sh@41 -- # break 00:11:37.317 21:07:59 -- bdev/nbd_common.sh@45 -- # return 0 00:11:37.317 21:07:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:37.317 21:07:59 -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:37.576 21:08:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:37.576 21:08:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:37.576 21:08:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:37.576 21:08:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:37.576 21:08:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:37.576 21:08:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:37.576 21:08:00 -- bdev/nbd_common.sh@41 -- # break 00:11:37.576 21:08:00 -- bdev/nbd_common.sh@45 -- # return 0 00:11:37.576 21:08:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:37.576 21:08:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:37.835 21:08:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:37.835 21:08:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:37.835 21:08:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:37.835 21:08:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:37.835 21:08:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:37.835 21:08:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:37.835 21:08:00 -- bdev/nbd_common.sh@41 -- # break 00:11:37.835 21:08:00 -- bdev/nbd_common.sh@45 -- # return 0 00:11:37.835 21:08:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:37.835 21:08:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:38.094 21:08:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:38.094 21:08:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:38.094 21:08:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:38.094 21:08:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:38.094 21:08:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:38.094 21:08:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:38.094 21:08:00 -- bdev/nbd_common.sh@41 -- # break 00:11:38.094 21:08:00 -- bdev/nbd_common.sh@45 -- # return 0 00:11:38.094 21:08:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:38.094 21:08:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:38.353 21:08:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:38.353 21:08:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:38.353 21:08:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:38.353 21:08:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:38.353 21:08:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:38.353 21:08:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:38.353 21:08:00 -- bdev/nbd_common.sh@41 -- # break 00:11:38.353 21:08:00 -- bdev/nbd_common.sh@45 -- # return 0 00:11:38.353 21:08:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:38.353 21:08:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:38.612 21:08:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:38.612 21:08:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:38.612 21:08:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:38.612 21:08:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:38.612 21:08:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:38.612 
21:08:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:38.612 21:08:01 -- bdev/nbd_common.sh@41 -- # break 00:11:38.612 21:08:01 -- bdev/nbd_common.sh@45 -- # return 0 00:11:38.612 21:08:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:38.612 21:08:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:38.871 21:08:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:38.871 21:08:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:38.871 21:08:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:38.871 21:08:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:38.871 21:08:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:38.871 21:08:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:38.871 21:08:01 -- bdev/nbd_common.sh@41 -- # break 00:11:38.871 21:08:01 -- bdev/nbd_common.sh@45 -- # return 0 00:11:38.871 21:08:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:38.871 21:08:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:39.130 21:08:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:39.130 21:08:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:39.130 21:08:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:39.130 21:08:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:39.130 21:08:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:39.130 21:08:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:39.130 21:08:01 -- bdev/nbd_common.sh@41 -- # break 00:11:39.130 21:08:01 -- bdev/nbd_common.sh@45 -- # return 0 00:11:39.130 21:08:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:39.130 21:08:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:39.389 21:08:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:39.389 21:08:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:39.389 21:08:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:39.389 21:08:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:39.389 21:08:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:39.389 21:08:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:39.389 21:08:01 -- bdev/nbd_common.sh@41 -- # break 00:11:39.389 21:08:01 -- bdev/nbd_common.sh@45 -- # return 0 00:11:39.389 21:08:01 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:39.389 21:08:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:39.389 21:08:01 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:39.647 21:08:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:39.647 21:08:02 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:39.647 21:08:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:39.648 21:08:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:39.648 21:08:02 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:39.648 21:08:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:39.648 21:08:02 -- bdev/nbd_common.sh@65 -- # true 00:11:39.648 21:08:02 -- bdev/nbd_common.sh@65 -- # count=0 00:11:39.648 21:08:02 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:39.648 21:08:02 -- bdev/nbd_common.sh@104 -- # count=0 00:11:39.648 21:08:02 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 
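With all sixteen devices stopped, nbd_get_count asks the SPDK app for its remaining NBD exports and asserts the answer is zero; the bare "true" in the trace just above is grep -c exiting non-zero on an empty match being tolerated. A sketch of that check, assuming the same rpc.py and socket paths:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
# grep -c still prints 0 but exits 1 when nothing matches, hence the || true.
count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
if [ "$count" -ne 0 ]; then
    echo "ERROR: $count NBD device(s) still attached after teardown"
    exit 1
fi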
00:11:39.648 21:08:02 -- bdev/nbd_common.sh@109 -- # return 0 00:11:39.648 21:08:02 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:39.648 21:08:02 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:39.648 21:08:02 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:11:39.648 21:08:02 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:11:39.648 21:08:02 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:39.648 21:08:02 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:39.906 malloc_lvol_verify 00:11:39.906 21:08:02 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:40.165 feef8024-2eb2-49c6-b227-8ae41eeccf2b 00:11:40.165 21:08:02 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:40.424 a8b822fd-dd95-4e0d-930c-8ab09654b3ad 00:11:40.424 21:08:02 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:40.682 /dev/nbd0 00:11:40.682 21:08:03 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:11:40.682 mke2fs 1.45.5 (07-Jan-2020) 00:11:40.682 00:11:40.682 Filesystem too small for a journal 00:11:40.682 Creating filesystem with 1024 4k blocks and 1024 inodes 00:11:40.682 00:11:40.682 Allocating group tables: 0/1 done 00:11:40.682 Writing inode tables: 0/1 done 00:11:40.682 Writing superblocks and filesystem accounting information: 0/1 done 00:11:40.682 00:11:40.682 21:08:03 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:11:40.682 21:08:03 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:40.682 21:08:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:40.682 21:08:03 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:11:40.682 21:08:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:40.682 21:08:03 -- bdev/nbd_common.sh@51 -- # local i 00:11:40.682 21:08:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:40.682 21:08:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:40.939 21:08:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:40.939 21:08:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:40.939 21:08:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:40.939 21:08:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:40.939 21:08:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:40.939 21:08:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:40.939 21:08:03 -- bdev/nbd_common.sh@41 -- # break 00:11:40.940 21:08:03 -- bdev/nbd_common.sh@45 -- # return 0 00:11:40.940 21:08:03 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:11:40.940 21:08:03 -- bdev/nbd_common.sh@147 -- # return 0 00:11:40.940 21:08:03 -- bdev/blockdev.sh@324 -- # killprocess 121927 00:11:40.940 21:08:03 -- common/autotest_common.sh@926 -- # '[' -z 121927 ']' 00:11:40.940 21:08:03 -- common/autotest_common.sh@930 -- # kill -0 121927 00:11:40.940 21:08:03 -- common/autotest_common.sh@931 -- # uname 00:11:40.940 21:08:03 -- 
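nbd_with_lvol_verify then re-exercises the NBD path through the logical-volume stack: a 16 MiB malloc bdev carries an lvstore, a 4 MiB lvol from that store is exported as /dev/nbd0, and mkfs.ext4 on it proves end-to-end reads and writes (the volume is deliberately too small for an ext4 journal, hence the notice above). The sequence, condensed from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol in store "lvs"
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0                 # hand it to the kernel
mkfs.ext4 /dev/nbd0                                                 # format as the verify step
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0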
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:40.940 21:08:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121927 00:11:40.940 killing process with pid 121927 00:11:40.940 21:08:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:40.940 21:08:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:40.940 21:08:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121927' 00:11:40.940 21:08:03 -- common/autotest_common.sh@945 -- # kill 121927 00:11:40.940 21:08:03 -- common/autotest_common.sh@950 -- # wait 121927 00:11:41.507 ************************************ 00:11:41.507 END TEST bdev_nbd 00:11:41.507 ************************************ 00:11:41.507 21:08:03 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:11:41.507 00:11:41.507 real 0m25.149s 00:11:41.507 user 0m34.705s 00:11:41.507 sys 0m9.436s 00:11:41.507 21:08:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.507 21:08:03 -- common/autotest_common.sh@10 -- # set +x 00:11:41.507 21:08:03 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:11:41.507 21:08:03 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:11:41.507 21:08:03 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:11:41.507 21:08:03 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:11:41.507 21:08:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:41.507 21:08:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:41.507 21:08:03 -- common/autotest_common.sh@10 -- # set +x 00:11:41.507 ************************************ 00:11:41.507 START TEST bdev_fio 00:11:41.507 ************************************ 00:11:41.507 21:08:03 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:11:41.507 21:08:03 -- bdev/blockdev.sh@329 -- # local env_context 00:11:41.507 21:08:03 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:11:41.507 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:11:41.507 21:08:03 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:11:41.507 21:08:03 -- bdev/blockdev.sh@337 -- # echo '' 00:11:41.507 21:08:03 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:11:41.507 21:08:03 -- bdev/blockdev.sh@337 -- # env_context= 00:11:41.507 21:08:03 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:11:41.507 21:08:03 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:41.507 21:08:03 -- common/autotest_common.sh@1260 -- # local workload=verify 00:11:41.507 21:08:03 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:11:41.507 21:08:03 -- common/autotest_common.sh@1262 -- # local env_context= 00:11:41.507 21:08:03 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:11:41.507 21:08:03 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:11:41.507 21:08:03 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:11:41.507 21:08:03 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:11:41.507 21:08:03 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:41.507 21:08:03 -- common/autotest_common.sh@1280 -- # cat 00:11:41.507 21:08:03 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:11:41.507 21:08:03 -- common/autotest_common.sh@1293 -- # cat 00:11:41.507 21:08:03 -- 
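killprocess, visible in the teardown above, is the shared helper that refuses to act on a dead or wrong PID, confirms on Linux that the target really is the SPDK reactor and not a sudo wrapper, then signals and reaps it. A sketch of that guard, with the names taken from the trace:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1              # process must still exist
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0 in the trace
        [ "$process_name" = sudo ] && return 1          # never kill the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}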
common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:11:41.507 21:08:03 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:11:41.507 21:08:04 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:11:41.507 21:08:04 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:11:41.507 21:08:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:41.507 21:08:04 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:11:41.507 21:08:04 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:11:41.507 21:08:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:41.507 21:08:04 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:11:41.507 21:08:04 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:11:41.507 21:08:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:41.507 21:08:04 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:11:41.507 21:08:04 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:11:41.507 21:08:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:41.507 21:08:04 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:11:41.507 21:08:04 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:11:41.507 21:08:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:41.507 21:08:04 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:11:41.507 21:08:04 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:11:41.507 21:08:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:41.507 21:08:04 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:11:41.507 21:08:04 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:11:41.507 21:08:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:41.507 21:08:04 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:11:41.508 21:08:04 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:11:41.508 21:08:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:41.508 21:08:04 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:11:41.508 21:08:04 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:11:41.508 21:08:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:41.508 21:08:04 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:11:41.508 21:08:04 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:11:41.508 21:08:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:41.508 21:08:04 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:11:41.508 21:08:04 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:11:41.508 21:08:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:41.508 21:08:04 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:11:41.508 21:08:04 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:11:41.508 21:08:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:41.508 21:08:04 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:11:41.508 21:08:04 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:11:41.508 21:08:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:41.508 21:08:04 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:11:41.508 21:08:04 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:11:41.508 21:08:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:41.508 21:08:04 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:11:41.508 21:08:04 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:11:41.508 21:08:04 -- bdev/blockdev.sh@339 -- # for b in 
"${bdevs_name[@]}" 00:11:41.508 21:08:04 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:11:41.508 21:08:04 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:11:41.508 21:08:04 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:41.508 21:08:04 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:11:41.508 21:08:04 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:11:41.508 21:08:04 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:11:41.508 21:08:04 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:41.508 21:08:04 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:41.508 21:08:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:41.508 21:08:04 -- common/autotest_common.sh@10 -- # set +x 00:11:41.508 ************************************ 00:11:41.508 START TEST bdev_fio_rw_verify 00:11:41.508 ************************************ 00:11:41.508 21:08:04 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:41.508 21:08:04 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:41.508 21:08:04 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:11:41.508 21:08:04 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:11:41.508 21:08:04 -- common/autotest_common.sh@1318 -- # local sanitizers 00:11:41.508 21:08:04 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:41.508 21:08:04 -- common/autotest_common.sh@1320 -- # shift 00:11:41.508 21:08:04 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:11:41.508 21:08:04 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:11:41.508 21:08:04 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:41.508 21:08:04 -- common/autotest_common.sh@1324 -- # grep libasan 00:11:41.508 21:08:04 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:11:41.508 21:08:04 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:11:41.508 21:08:04 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:11:41.508 21:08:04 -- common/autotest_common.sh@1326 -- # break 00:11:41.508 21:08:04 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:11:41.508 21:08:04 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:41.767 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.767 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.767 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.767 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.767 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.767 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.767 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.767 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.767 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.767 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.767 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.767 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.767 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.767 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.767 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.767 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.767 fio-3.35 00:11:41.767 Starting 16 threads 00:11:53.966 00:11:53.966 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=123158: Fri Jun 7 21:08:15 2024 00:11:53.966 read: IOPS=85.3k, BW=333MiB/s (349MB/s)(3332MiB/10001msec) 00:11:53.966 slat (usec): min=2, max=41998, avg=31.50, stdev=381.63 00:11:53.966 clat (usec): min=10, max=42243, avg=259.26, stdev=1145.98 00:11:53.966 lat (usec): min=27, max=42267, avg=290.76, stdev=1207.17 00:11:53.966 clat percentiles (usec): 00:11:53.966 | 50.000th=[ 157], 99.000th=[ 701], 99.900th=[16319], 99.990th=[24249], 00:11:53.966 | 99.999th=[36439] 00:11:53.966 write: IOPS=137k, BW=536MiB/s (562MB/s)(5293MiB/9869msec); 0 zone resets 00:11:53.966 slat (usec): min=6, max=54084, avg=60.03, stdev=577.41 00:11:53.966 clat (usec): min=8, max=54362, avg=339.40, stdev=1338.57 00:11:53.966 lat (usec): min=36, max=54392, avg=399.43, stdev=1457.66 00:11:53.966 clat percentiles (usec): 00:11:53.966 | 50.000th=[ 202], 99.000th=[ 3916], 99.900th=[16319], 99.990th=[31589], 00:11:53.966 | 99.999th=[48497] 00:11:53.966 bw ( KiB/s): min=337056, max=856102, per=98.63%, avg=541650.53, stdev=9295.60, samples=304 00:11:53.966 iops : min=84264, max=214025, avg=135412.42, stdev=2323.90, samples=304 00:11:53.966 lat (usec) : 10=0.01%, 20=0.01%, 50=0.80%, 
100=15.36%, 250=60.16% 00:11:53.966 lat (usec) : 500=20.25%, 750=1.87%, 1000=0.38% 00:11:53.966 lat (msec) : 2=0.19%, 4=0.10%, 10=0.33%, 20=0.50%, 50=0.05% 00:11:53.966 lat (msec) : 100=0.01% 00:11:53.966 cpu : usr=57.87%, sys=2.02%, ctx=230500, majf=0, minf=104095 00:11:53.966 IO depths : 1=11.6%, 2=23.9%, 4=51.5%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:53.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.966 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.966 issued rwts: total=853091,1354922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.966 latency : target=0, window=0, percentile=100.00%, depth=8 00:11:53.966 00:11:53.966 Run status group 0 (all jobs): 00:11:53.966 READ: bw=333MiB/s (349MB/s), 333MiB/s-333MiB/s (349MB/s-349MB/s), io=3332MiB (3494MB), run=10001-10001msec 00:11:53.966 WRITE: bw=536MiB/s (562MB/s), 536MiB/s-536MiB/s (562MB/s-562MB/s), io=5293MiB (5550MB), run=9869-9869msec 00:11:53.966 ----------------------------------------------------- 00:11:53.966 Suppressions used: 00:11:53.966 count bytes template 00:11:53.966 16 140 /usr/src/fio/parse.c 00:11:53.966 11514 1105344 /usr/src/fio/iolog.c 00:11:53.966 2 596 libcrypto.so 00:11:53.966 ----------------------------------------------------- 00:11:53.966 00:11:53.966 ************************************ 00:11:53.966 END TEST bdev_fio_rw_verify 00:11:53.966 ************************************ 00:11:53.966 00:11:53.966 real 0m12.019s 00:11:53.966 user 1m35.471s 00:11:53.966 sys 0m4.104s 00:11:53.966 21:08:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:53.966 21:08:16 -- common/autotest_common.sh@10 -- # set +x 00:11:53.966 21:08:16 -- bdev/blockdev.sh@348 -- # rm -f 00:11:53.966 21:08:16 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:53.966 21:08:16 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:11:53.966 21:08:16 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:53.966 21:08:16 -- common/autotest_common.sh@1260 -- # local workload=trim 00:11:53.966 21:08:16 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:11:53.966 21:08:16 -- common/autotest_common.sh@1262 -- # local env_context= 00:11:53.966 21:08:16 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:11:53.966 21:08:16 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:11:53.966 21:08:16 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:11:53.966 21:08:16 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:11:53.966 21:08:16 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:53.966 21:08:16 -- common/autotest_common.sh@1280 -- # cat 00:11:53.966 21:08:16 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:11:53.966 21:08:16 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:11:53.966 21:08:16 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:11:53.966 21:08:16 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:11:53.967 21:08:16 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "6f5f75d9-2055-4154-b9b5-b34a803a6b5a"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "6f5f75d9-2055-4154-b9b5-b34a803a6b5a",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "5b783316-67eb-5541-951e-09cdfd0736d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "5b783316-67eb-5541-951e-09cdfd0736d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "536a5e68-b353-5a08-98bc-1b8660082e93"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "536a5e68-b353-5a08-98bc-1b8660082e93",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "9df9b6b6-e0ea-5bd3-84c9-60add694c1e6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9df9b6b6-e0ea-5bd3-84c9-60add694c1e6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "8bca855e-87c3-5f70-9baf-6e58800d7517"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8bca855e-87c3-5f70-9baf-6e58800d7517",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": 
"Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "c5478617-4dbb-569a-b447-b81a3cca53f7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c5478617-4dbb-569a-b447-b81a3cca53f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "5e8150f8-8d06-56c6-8b76-3886d087d31c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5e8150f8-8d06-56c6-8b76-3886d087d31c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "9223a87a-36ec-546e-bc8f-ffe39504c823"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9223a87a-36ec-546e-bc8f-ffe39504c823",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "dfe5af31-dec6-5b70-9e7d-1fe24d12d8c9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dfe5af31-dec6-5b70-9e7d-1fe24d12d8c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "e92f23b8-c09a-5805-88bb-d7eecf1126d7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e92f23b8-c09a-5805-88bb-d7eecf1126d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "fc1d0117-93f7-50a0-a73f-247f21d713cf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fc1d0117-93f7-50a0-a73f-247f21d713cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "fd5616a2-eb8f-5af6-9d0e-030f2a25da4b"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fd5616a2-eb8f-5af6-9d0e-030f2a25da4b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "7d952d80-0b61-464b-9378-59fede82d8ca"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7d952d80-0b61-464b-9378-59fede82d8ca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "7d952d80-0b61-464b-9378-59fede82d8ca",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "a308ada7-81f0-4517-ac19-0be2c35eb139",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "44450791-1194-4e87-8fd0-226c7e312954",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "52005f19-ea6c-4eba-8025-3834e999df3a"' ' ],' ' "product_name": "Raid Volume",' ' 
"block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "52005f19-ea6c-4eba-8025-3834e999df3a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "52005f19-ea6c-4eba-8025-3834e999df3a",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "31321730-cdbf-4756-9cea-6901f74ad6d8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "fff1b05b-9056-4c41-b695-1e0478979683",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "bdb83198-bb51-42ac-a995-606e21c25f35"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "bdb83198-bb51-42ac-a995-606e21c25f35",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "bdb83198-bb51-42ac-a995-606e21c25f35",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "6b1c1a61-e61b-4191-9386-c88d8ea6e21f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "3e8db668-9c30-4c46-9a65-1549532a05e2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "5806f2a7-94ff-49c5-89f6-9db788759411"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "5806f2a7-94ff-49c5-89f6-9db788759411",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": 
"/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:11:53.967 21:08:16 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:11:53.967 Malloc1p0 00:11:53.967 Malloc1p1 00:11:53.967 Malloc2p0 00:11:53.967 Malloc2p1 00:11:53.967 Malloc2p2 00:11:53.967 Malloc2p3 00:11:53.967 Malloc2p4 00:11:53.967 Malloc2p5 00:11:53.967 Malloc2p6 00:11:53.967 Malloc2p7 00:11:53.967 TestPT 00:11:53.967 raid0 00:11:53.967 concat0 ]] 00:11:53.967 21:08:16 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:11:53.968 21:08:16 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "6f5f75d9-2055-4154-b9b5-b34a803a6b5a"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "6f5f75d9-2055-4154-b9b5-b34a803a6b5a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "5b783316-67eb-5541-951e-09cdfd0736d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "5b783316-67eb-5541-951e-09cdfd0736d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "536a5e68-b353-5a08-98bc-1b8660082e93"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "536a5e68-b353-5a08-98bc-1b8660082e93",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "9df9b6b6-e0ea-5bd3-84c9-60add694c1e6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9df9b6b6-e0ea-5bd3-84c9-60add694c1e6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "8bca855e-87c3-5f70-9baf-6e58800d7517"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8bca855e-87c3-5f70-9baf-6e58800d7517",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "c5478617-4dbb-569a-b447-b81a3cca53f7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c5478617-4dbb-569a-b447-b81a3cca53f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "5e8150f8-8d06-56c6-8b76-3886d087d31c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5e8150f8-8d06-56c6-8b76-3886d087d31c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "9223a87a-36ec-546e-bc8f-ffe39504c823"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9223a87a-36ec-546e-bc8f-ffe39504c823",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "dfe5af31-dec6-5b70-9e7d-1fe24d12d8c9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dfe5af31-dec6-5b70-9e7d-1fe24d12d8c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' 
' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "e92f23b8-c09a-5805-88bb-d7eecf1126d7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e92f23b8-c09a-5805-88bb-d7eecf1126d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "fc1d0117-93f7-50a0-a73f-247f21d713cf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fc1d0117-93f7-50a0-a73f-247f21d713cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "fd5616a2-eb8f-5af6-9d0e-030f2a25da4b"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fd5616a2-eb8f-5af6-9d0e-030f2a25da4b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "7d952d80-0b61-464b-9378-59fede82d8ca"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7d952d80-0b61-464b-9378-59fede82d8ca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "7d952d80-0b61-464b-9378-59fede82d8ca",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "a308ada7-81f0-4517-ac19-0be2c35eb139",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "44450791-1194-4e87-8fd0-226c7e312954",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "52005f19-ea6c-4eba-8025-3834e999df3a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "52005f19-ea6c-4eba-8025-3834e999df3a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "52005f19-ea6c-4eba-8025-3834e999df3a",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "31321730-cdbf-4756-9cea-6901f74ad6d8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "fff1b05b-9056-4c41-b695-1e0478979683",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "bdb83198-bb51-42ac-a995-606e21c25f35"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "bdb83198-bb51-42ac-a995-606e21c25f35",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "bdb83198-bb51-42ac-a995-606e21c25f35",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "6b1c1a61-e61b-4191-9386-c88d8ea6e21f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 
65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "3e8db668-9c30-4c46-9a65-1549532a05e2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "5806f2a7-94ff-49c5-89f6-9db788759411"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "5806f2a7-94ff-49c5-89f6-9db788759411",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:11:53.968 21:08:16 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:53.968 21:08:16 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:11:53.968 21:08:16 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:11:53.968 21:08:16 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:53.968 21:08:16 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:11:53.968 21:08:16 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:11:53.968 21:08:16 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:53.968 21:08:16 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:11:53.968 21:08:16 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:11:53.968 21:08:16 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:53.968 21:08:16 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:11:53.968 21:08:16 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:11:53.968 21:08:16 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:53.968 21:08:16 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:11:53.968 21:08:16 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:11:53.968 21:08:16 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:53.968 21:08:16 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:11:53.968 21:08:16 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:11:53.968 21:08:16 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:53.968 21:08:16 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:11:53.968 21:08:16 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:11:53.968 21:08:16 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:53.968 21:08:16 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:11:53.968 21:08:16 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:11:53.968 21:08:16 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 
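The blockdev.sh@354-356 entries repeated through this stretch are a single loop emitting one fio job section per trim-capable bdev. A minimal standalone sketch of that loop, assuming the sections are appended to the bdev.fio named in the run_test line (the target variable fio_config is invented here for illustration; the log only shows the echoes):

    # One [job_<name>] section per bdev whose supported_io_types.unmap is true,
    # using the same jq filter the xtrace shows for blockdev.sh@354-356.
    for b in $(printf '%s\n' "${bdevs[@]}" \
            | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
        echo "[job_$b]"      # section header, e.g. [job_Malloc2p5]
        echo "filename=$b"   # fio's spdk_bdev ioengine opens the bdev by name
    done >> "$fio_config"    # assumed target: test/bdev/bdev.fio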
00:11:53.968 21:08:16 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:11:53.968 21:08:16 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:11:53.968 21:08:16 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:53.968 21:08:16 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:11:53.968 21:08:16 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:11:53.968 21:08:16 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:53.968 21:08:16 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:11:53.968 21:08:16 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:11:53.968 21:08:16 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:53.968 21:08:16 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:11:53.968 21:08:16 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:11:53.968 21:08:16 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:53.968 21:08:16 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:11:53.968 21:08:16 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:11:53.969 21:08:16 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:53.969 21:08:16 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:11:53.969 21:08:16 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:11:53.969 21:08:16 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:53.969 21:08:16 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:53.969 21:08:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:53.969 21:08:16 -- common/autotest_common.sh@10 -- # set +x 00:11:53.969 ************************************ 00:11:53.969 START TEST bdev_fio_trim 00:11:53.969 ************************************ 00:11:53.969 21:08:16 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:53.969 21:08:16 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:53.969 21:08:16 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:11:53.969 21:08:16 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:11:53.969 21:08:16 -- common/autotest_common.sh@1318 -- # local sanitizers 00:11:53.969 21:08:16 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:53.969 21:08:16 -- common/autotest_common.sh@1320 -- # shift 00:11:53.969 21:08:16 -- 
common/autotest_common.sh@1322 -- # local asan_lib= 00:11:53.969 21:08:16 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:11:53.969 21:08:16 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:53.969 21:08:16 -- common/autotest_common.sh@1324 -- # grep libasan 00:11:53.969 21:08:16 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:11:53.969 21:08:16 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:11:53.969 21:08:16 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:11:53.969 21:08:16 -- common/autotest_common.sh@1326 -- # break 00:11:53.969 21:08:16 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:11:53.969 21:08:16 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:53.969 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:53.969 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:53.969 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:53.969 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:53.969 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:53.969 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:53.969 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:53.969 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:53.969 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:53.969 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:53.969 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:53.969 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:53.969 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:53.969 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:53.969 fio-3.35 00:11:53.969 Starting 14 threads 00:12:06.304 00:12:06.304 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=123371: Fri Jun 7 21:08:27 2024 00:12:06.304 write: IOPS=130k, BW=509MiB/s (534MB/s)(5096MiB/10003msec); 0 zone resets 00:12:06.304 slat (usec): min=2, max=29349, avg=38.15, stdev=393.50 00:12:06.304 clat (usec): min=23, max=29955, avg=271.30, stdev=1077.83 00:12:06.304 lat (usec): min=32, max=29975, avg=309.45, stdev=1147.21 00:12:06.304 clat percentiles (usec): 00:12:06.304 | 
50.000th=[ 182], 99.000th=[ 619], 99.900th=[16319], 99.990th=[22938], 00:12:06.304 | 99.999th=[29230] 00:12:06.304 bw ( KiB/s): min=297528, max=878080, per=100.00%, avg=522560.74, stdev=12043.65, samples=266 00:12:06.304 iops : min=74382, max=219520, avg=130640.16, stdev=3010.91, samples=266 00:12:06.304 trim: IOPS=130k, BW=509MiB/s (534MB/s)(5096MiB/10003msec); 0 zone resets 00:12:06.304 slat (usec): min=4, max=29676, avg=26.28, stdev=331.27 00:12:06.304 clat (usec): min=4, max=29975, avg=294.33, stdev=1113.33 00:12:06.304 lat (usec): min=13, max=29996, avg=320.61, stdev=1161.67 00:12:06.304 clat percentiles (usec): 00:12:06.304 | 50.000th=[ 204], 99.000th=[ 742], 99.900th=[16319], 99.990th=[24249], 00:12:06.304 | 99.999th=[29492] 00:12:06.304 bw ( KiB/s): min=297528, max=878144, per=100.00%, avg=522560.74, stdev=12043.48, samples=266 00:12:06.304 iops : min=74382, max=219536, avg=130640.16, stdev=3010.87, samples=266 00:12:06.304 lat (usec) : 10=0.08%, 20=0.25%, 50=1.13%, 100=7.64%, 250=64.02% 00:12:06.304 lat (usec) : 500=25.66%, 750=0.30%, 1000=0.16% 00:12:06.304 lat (msec) : 2=0.20%, 4=0.01%, 10=0.07%, 20=0.46%, 50=0.02% 00:12:06.304 cpu : usr=68.93%, sys=0.50%, ctx=169197, majf=0, minf=9008 00:12:06.304 IO depths : 1=12.3%, 2=24.6%, 4=50.1%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:06.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.304 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.304 issued rwts: total=0,1304503,1304503,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.304 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:06.304 00:12:06.304 Run status group 0 (all jobs): 00:12:06.304 WRITE: bw=509MiB/s (534MB/s), 509MiB/s-509MiB/s (534MB/s-534MB/s), io=5096MiB (5343MB), run=10003-10003msec 00:12:06.304 TRIM: bw=509MiB/s (534MB/s), 509MiB/s-509MiB/s (534MB/s-534MB/s), io=5096MiB (5343MB), run=10003-10003msec 00:12:06.304 ----------------------------------------------------- 00:12:06.304 Suppressions used: 00:12:06.304 count bytes template 00:12:06.304 14 129 /usr/src/fio/parse.c 00:12:06.304 2 596 libcrypto.so 00:12:06.304 ----------------------------------------------------- 00:12:06.304 00:12:06.304 00:12:06.304 real 0m11.744s 00:12:06.304 user 1m39.212s 00:12:06.304 sys 0m1.532s 00:12:06.304 ************************************ 00:12:06.304 END TEST bdev_fio_trim 00:12:06.304 ************************************ 00:12:06.304 21:08:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:06.304 21:08:27 -- common/autotest_common.sh@10 -- # set +x 00:12:06.304 21:08:28 -- bdev/blockdev.sh@366 -- # rm -f 00:12:06.304 21:08:28 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:06.304 21:08:28 -- bdev/blockdev.sh@368 -- # popd 00:12:06.304 /home/vagrant/spdk_repo/spdk 00:12:06.304 21:08:28 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:12:06.304 00:12:06.304 real 0m24.072s 00:12:06.304 user 3m14.874s 00:12:06.304 sys 0m5.731s 00:12:06.304 21:08:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:06.304 ************************************ 00:12:06.304 END TEST bdev_fio 00:12:06.304 ************************************ 00:12:06.304 21:08:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.304 21:08:28 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:06.304 21:08:28 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:06.304 21:08:28 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:12:06.304 21:08:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:06.304 21:08:28 -- common/autotest_common.sh@10 -- # set +x 00:12:06.304 ************************************ 00:12:06.304 START TEST bdev_verify 00:12:06.304 ************************************ 00:12:06.304 21:08:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:06.304 [2024-06-07 21:08:28.137380] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:12:06.304 [2024-06-07 21:08:28.138168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123566 ] 00:12:06.304 [2024-06-07 21:08:28.299532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:06.304 [2024-06-07 21:08:28.408047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.304 [2024-06-07 21:08:28.408055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.304 [2024-06-07 21:08:28.592538] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:06.304 [2024-06-07 21:08:28.592730] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:06.304 [2024-06-07 21:08:28.600420] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:06.304 [2024-06-07 21:08:28.600533] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:06.304 [2024-06-07 21:08:28.608514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:06.304 [2024-06-07 21:08:28.608598] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:06.304 [2024-06-07 21:08:28.608655] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:06.304 [2024-06-07 21:08:28.722989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:06.304 [2024-06-07 21:08:28.723228] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.304 [2024-06-07 21:08:28.723305] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:06.304 [2024-06-07 21:08:28.723332] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.304 [2024-06-07 21:08:28.726712] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.304 [2024-06-07 21:08:28.726762] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:06.563 Running I/O for 5 seconds... 
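The vbdev_passthru notices just above (Match on Malloc3 through created pt_bdev for: TestPT) are the claim-and-register flow for a single passthru vbdev. A sketch of the RPC that produces that flow, assuming SPDK's stock scripts/rpc.py against the default application socket:

    # Claim Malloc3 and re-export it unmodified under the name TestPT;
    # this drives the Match / bdev claimed / pt_bdev registered notices above.
    scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT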
00:12:11.832 00:12:11.832 Latency(us) 00:12:11.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:11.832 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x0 length 0x1000 00:12:11.832 Malloc0 : 5.17 1528.96 5.97 0.00 0.00 82959.80 2100.13 200182.69 00:12:11.832 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x1000 length 0x1000 00:12:11.832 Malloc0 : 5.17 1552.14 6.06 0.00 0.00 81800.68 2293.76 276442.76 00:12:11.832 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x0 length 0x800 00:12:11.832 Malloc1p0 : 5.17 1048.62 4.10 0.00 0.00 120739.90 6136.55 186837.18 00:12:11.832 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x800 length 0x800 00:12:11.832 Malloc1p0 : 5.17 1081.45 4.22 0.00 0.00 117202.11 5391.83 169678.66 00:12:11.832 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x0 length 0x800 00:12:11.832 Malloc1p1 : 5.17 1048.16 4.09 0.00 0.00 120484.38 6285.50 180164.42 00:12:11.832 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x800 length 0x800 00:12:11.832 Malloc1p1 : 5.17 1080.93 4.22 0.00 0.00 116956.98 5808.87 162052.65 00:12:11.832 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x0 length 0x200 00:12:11.832 Malloc2p0 : 5.18 1047.68 4.09 0.00 0.00 120205.84 5868.45 175398.17 00:12:11.832 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x200 length 0x200 00:12:11.832 Malloc2p0 : 5.18 1080.44 4.22 0.00 0.00 116727.21 5064.15 156333.15 00:12:11.832 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x0 length 0x200 00:12:11.832 Malloc2p1 : 5.18 1047.25 4.09 0.00 0.00 119937.05 5659.93 169678.66 00:12:11.832 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x200 length 0x200 00:12:11.832 Malloc2p1 : 5.18 1079.96 4.22 0.00 0.00 116518.83 5242.88 151566.89 00:12:11.832 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x0 length 0x200 00:12:11.832 Malloc2p2 : 5.21 1057.65 4.13 0.00 0.00 119018.54 5481.19 164912.41 00:12:11.832 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x200 length 0x200 00:12:11.832 Malloc2p2 : 5.18 1079.44 4.22 0.00 0.00 116325.00 5034.36 146800.64 00:12:11.832 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x0 length 0x200 00:12:11.832 Malloc2p3 : 5.21 1057.04 4.13 0.00 0.00 118800.99 5808.87 160146.15 00:12:11.832 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x200 length 0x200 00:12:11.832 Malloc2p3 : 5.18 1079.19 4.22 0.00 0.00 116085.69 5600.35 141081.13 00:12:11.832 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x0 length 0x200 00:12:11.832 Malloc2p4 : 5.22 1056.45 4.13 0.00 0.00 
118546.34 5600.35 154426.65 00:12:11.832 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x200 length 0x200 00:12:11.832 Malloc2p4 : 5.21 1090.28 4.26 0.00 0.00 115128.22 5213.09 137268.13 00:12:11.832 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x0 length 0x200 00:12:11.832 Malloc2p5 : 5.22 1055.82 4.12 0.00 0.00 118325.84 5898.24 148707.14 00:12:11.832 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x200 length 0x200 00:12:11.832 Malloc2p5 : 5.21 1090.03 4.26 0.00 0.00 114908.42 5064.15 132501.88 00:12:11.832 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x0 length 0x200 00:12:11.832 Malloc2p6 : 5.22 1055.19 4.12 0.00 0.00 118082.86 5659.93 143940.89 00:12:11.832 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x200 length 0x200 00:12:11.832 Malloc2p6 : 5.21 1089.73 4.26 0.00 0.00 114683.23 5779.08 126782.37 00:12:11.832 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x0 length 0x200 00:12:11.832 Malloc2p7 : 5.22 1054.58 4.12 0.00 0.00 117840.05 4557.73 138221.38 00:12:11.832 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x200 length 0x200 00:12:11.832 Malloc2p7 : 5.21 1089.11 4.25 0.00 0.00 114477.03 5123.72 122016.12 00:12:11.832 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x0 length 0x1000 00:12:11.832 TestPT : 5.23 1054.34 4.12 0.00 0.00 117644.32 5898.24 132501.88 00:12:11.832 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x1000 length 0x1000 00:12:11.832 TestPT : 5.22 1062.99 4.15 0.00 0.00 117019.84 5600.35 214481.45 00:12:11.832 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x0 length 0x2000 00:12:11.832 raid0 : 5.23 1054.05 4.12 0.00 0.00 117388.11 6374.87 125829.12 00:12:11.832 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x2000 length 0x2000 00:12:11.832 raid0 : 5.22 1087.84 4.25 0.00 0.00 114090.48 5064.15 106287.48 00:12:11.832 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x0 length 0x2000 00:12:11.832 concat0 : 5.23 1068.59 4.17 0.00 0.00 115916.16 3023.59 120109.61 00:12:11.832 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x2000 length 0x2000 00:12:11.832 concat0 : 5.22 1087.19 4.25 0.00 0.00 113871.95 5540.77 106764.10 00:12:11.832 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x0 length 0x1000 00:12:11.832 raid1 : 5.24 1068.36 4.17 0.00 0.00 115649.93 2815.07 118203.11 00:12:11.832 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x1000 length 0x1000 00:12:11.832 raid1 : 5.23 1102.41 4.31 0.00 0.00 112384.64 1772.45 108193.98 00:12:11.832 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 
Verification LBA range: start 0x0 length 0x4e2 00:12:11.832 AIO0 : 5.24 1067.99 4.17 0.00 0.00 115427.82 2681.02 117726.49 00:12:11.832 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:11.832 Verification LBA range: start 0x4e2 length 0x4e2 00:12:11.832 AIO0 : 5.23 1101.68 4.30 0.00 0.00 112143.70 3172.54 108670.60 00:12:11.832 =================================================================================================================== 00:12:11.832 Total : 35205.55 137.52 0.00 0.00 113728.70 1772.45 276442.76 00:12:12.091 00:12:12.091 real 0m6.629s 00:12:12.091 user 0m11.885s 00:12:12.091 sys 0m0.672s 00:12:12.091 21:08:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:12.091 ************************************ 00:12:12.091 END TEST bdev_verify 00:12:12.091 ************************************ 00:12:12.091 21:08:34 -- common/autotest_common.sh@10 -- # set +x 00:12:12.091 21:08:34 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:12.091 21:08:34 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:12:12.091 21:08:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:12.091 21:08:34 -- common/autotest_common.sh@10 -- # set +x 00:12:12.091 ************************************ 00:12:12.091 START TEST bdev_verify_big_io 00:12:12.091 ************************************ 00:12:12.091 21:08:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:12.350 [2024-06-07 21:08:34.811831] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
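Sanity check on the bdev_verify totals above: the run used -o 4096, so 35205.55 IOPS x 4096 B per IO = 35205.55 / 256 MiB/s, about 137.5 MiB/s, matching the 137.52 MiB/s reported in the Total row.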
00:12:12.350 [2024-06-07 21:08:34.812988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123665 ] 00:12:12.350 [2024-06-07 21:08:34.984561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:12.609 [2024-06-07 21:08:35.049251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.609 [2024-06-07 21:08:35.049258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.609 [2024-06-07 21:08:35.192973] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:12.609 [2024-06-07 21:08:35.193394] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:12.609 [2024-06-07 21:08:35.200940] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:12.609 [2024-06-07 21:08:35.201134] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:12.609 [2024-06-07 21:08:35.208992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:12.609 [2024-06-07 21:08:35.209167] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:12.609 [2024-06-07 21:08:35.209310] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:12.868 [2024-06-07 21:08:35.303100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:12.868 [2024-06-07 21:08:35.303531] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.868 [2024-06-07 21:08:35.303646] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:12.868 [2024-06-07 21:08:35.303841] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.868 [2024-06-07 21:08:35.306754] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.868 [2024-06-07 21:08:35.306935] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:12.868 [2024-06-07 21:08:35.501752] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:12.868 [2024-06-07 21:08:35.503061] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:12.868 [2024-06-07 21:08:35.504768] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:12:12.868 [2024-06-07 21:08:35.506465] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:12:12.868 [2024-06-07 21:08:35.507621] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:12.868 [2024-06-07 21:08:35.509429] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:12.868 [2024-06-07 21:08:35.510708] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:12.868 [2024-06-07 21:08:35.512460] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:12.868 [2024-06-07 21:08:35.513683] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:12.868 [2024-06-07 21:08:35.515586] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:12.868 [2024-06-07 21:08:35.516710] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:12.868 [2024-06-07 21:08:35.518734] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:12.868 [2024-06-07 21:08:35.519902] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:12.868 [2024-06-07 21:08:35.521712] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:12.869 [2024-06-07 21:08:35.523539] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:12:12.869 [2024-06-07 21:08:35.524733] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:12:13.127 [2024-06-07 21:08:35.551665] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:13.127 [2024-06-07 21:08:35.553970] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:13.127 Running I/O for 5 seconds... 00:12:19.722 00:12:19.722 Latency(us) 00:12:19.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.722 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x0 length 0x100 00:12:19.722 Malloc0 : 5.71 306.87 19.18 0.00 0.00 402194.90 27048.49 1082893.03 00:12:19.722 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x100 length 0x100 00:12:19.722 Malloc0 : 5.72 307.34 19.21 0.00 0.00 409654.08 25141.99 1250665.19 00:12:19.722 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x0 length 0x80 00:12:19.722 Malloc1p0 : 5.84 174.41 10.90 0.00 0.00 693525.97 49092.42 1304047.24 00:12:19.722 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x80 length 0x80 00:12:19.722 Malloc1p0 : 5.72 236.03 14.75 0.00 0.00 526792.67 48139.17 1121023.07 00:12:19.722 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x0 length 0x80 00:12:19.722 Malloc1p1 : 5.97 106.01 6.63 0.00 0.00 1115874.87 49330.73 2348810.24 00:12:19.722 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x80 length 0x80 00:12:19.722 Malloc1p1 : 5.95 112.19 7.01 0.00 0.00 1066056.31 45756.04 2333558.23 00:12:19.722 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x0 length 0x20 00:12:19.722 Malloc2p0 : 5.78 60.36 3.77 0.00 0.00 496393.21 9651.67 861738.82 00:12:19.722 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x20 length 0x20 00:12:19.722 Malloc2p0 : 5.72 60.99 3.81 0.00 0.00 491117.12 9472.93 739722.71 00:12:19.722 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x0 length 0x20 00:12:19.722 Malloc2p1 : 5.78 60.34 3.77 0.00 0.00 493815.98 8936.73 846486.81 00:12:19.722 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x20 length 0x20 00:12:19.722 Malloc2p1 : 5.72 60.98 3.81 0.00 0.00 488698.91 9651.67 724470.69 00:12:19.722 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x0 length 0x20 00:12:19.722 Malloc2p2 : 5.78 60.33 3.77 0.00 0.00 491460.93 8936.73 831234.79 00:12:19.722 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x20 length 0x20 00:12:19.722 Malloc2p2 : 5.72 60.96 3.81 0.00 0.00 486318.94 8340.95 705405.67 00:12:19.722 Job: Malloc2p3 (Core Mask 0x1, 
workload: verify, depth: 32, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x0 length 0x20 00:12:19.722 Malloc2p3 : 5.79 60.32 3.77 0.00 0.00 489312.19 9115.46 815982.78 00:12:19.722 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x20 length 0x20 00:12:19.722 Malloc2p3 : 5.73 60.95 3.81 0.00 0.00 484158.25 7685.59 690153.66 00:12:19.722 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x0 length 0x20 00:12:19.722 Malloc2p4 : 5.79 60.31 3.77 0.00 0.00 486716.27 9055.88 796917.76 00:12:19.722 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x20 length 0x20 00:12:19.722 Malloc2p4 : 5.73 60.93 3.81 0.00 0.00 481877.21 9770.82 674901.64 00:12:19.722 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x0 length 0x20 00:12:19.722 Malloc2p5 : 5.79 60.30 3.77 0.00 0.00 484427.64 9949.56 781665.75 00:12:19.722 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x20 length 0x20 00:12:19.722 Malloc2p5 : 5.73 60.92 3.81 0.00 0.00 479446.11 8519.68 655836.63 00:12:19.722 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x0 length 0x20 00:12:19.722 Malloc2p6 : 5.79 60.28 3.77 0.00 0.00 481994.75 9949.56 758787.72 00:12:19.722 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x20 length 0x20 00:12:19.722 Malloc2p6 : 5.73 60.91 3.81 0.00 0.00 477194.26 8638.84 640584.61 00:12:19.722 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x0 length 0x20 00:12:19.722 Malloc2p7 : 5.79 60.27 3.77 0.00 0.00 479377.83 10664.49 739722.71 00:12:19.722 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x20 length 0x20 00:12:19.722 Malloc2p7 : 5.73 60.89 3.81 0.00 0.00 474748.14 8519.68 621519.59 00:12:19.722 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x0 length 0x100 00:12:19.722 TestPT : 5.92 112.74 7.05 0.00 0.00 1005109.79 54096.99 2303054.20 00:12:19.722 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x100 length 0x100 00:12:19.722 TestPT : 5.92 102.21 6.39 0.00 0.00 1104144.23 64821.06 2242046.14 00:12:19.722 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:19.722 Verification LBA range: start 0x0 length 0x200 00:12:19.722 raid0 : 6.02 115.42 7.21 0.00 0.00 959834.00 48377.48 2287802.18 00:12:19.722 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:19.723 Verification LBA range: start 0x200 length 0x200 00:12:19.723 raid0 : 5.96 116.62 7.29 0.00 0.00 961059.85 45994.36 2318306.21 00:12:19.723 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:19.723 Verification LBA range: start 0x0 length 0x200 00:12:19.723 concat0 : 5.99 120.78 7.55 0.00 0.00 901829.51 39321.60 2287802.18 00:12:19.723 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:19.723 Verification LBA range: start 0x200 length 0x200 00:12:19.723 concat0 : 5.96 121.46 
7.59 0.00 0.00 907418.34 35031.97 2303054.20 00:12:19.723 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:19.723 Verification LBA range: start 0x0 length 0x100 00:12:19.723 raid1 : 6.00 144.58 9.04 0.00 0.00 746117.02 16920.20 2287802.18 00:12:19.723 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:19.723 Verification LBA range: start 0x100 length 0x100 00:12:19.723 raid1 : 5.96 139.04 8.69 0.00 0.00 786032.13 21924.77 2303054.20 00:12:19.723 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:12:19.723 Verification LBA range: start 0x0 length 0x4e 00:12:19.723 AIO0 : 6.04 158.17 9.89 0.00 0.00 408170.98 1251.14 1296421.24 00:12:19.723 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:12:19.723 Verification LBA range: start 0x4e length 0x4e 00:12:19.723 AIO0 : 5.96 140.85 8.80 0.00 0.00 465732.31 4587.52 1311673.25 00:12:19.723 =================================================================================================================== 00:12:19.723 Total : 3484.75 217.80 0.00 0.00 640070.06 1251.14 2348810.24 00:12:19.723 ************************************ 00:12:19.723 END TEST bdev_verify_big_io 00:12:19.723 ************************************ 00:12:19.723 00:12:19.723 real 0m7.356s 00:12:19.723 user 0m13.512s 00:12:19.723 sys 0m0.544s 00:12:19.723 21:08:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:19.723 21:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:19.723 21:08:42 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:19.723 21:08:42 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:19.723 21:08:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:19.723 21:08:42 -- common/autotest_common.sh@10 -- # set +x 00:12:19.723 ************************************ 00:12:19.723 START TEST bdev_write_zeroes 00:12:19.723 ************************************ 00:12:19.723 21:08:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:19.723 [2024-06-07 21:08:42.240432] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
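The same cross-check holds for the bdev_verify_big_io totals above: at -o 65536 each IO is 64 KiB, so 3484.75 IOPS / 16 is about 217.8 MiB/s, which is the 217.80 MiB/s in that run's Total row.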
00:12:19.723 [2024-06-07 21:08:42.241031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123792 ] 00:12:19.982 [2024-06-07 21:08:42.433122] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.982 [2024-06-07 21:08:42.498466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.982 [2024-06-07 21:08:42.637672] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:19.982 [2024-06-07 21:08:42.638078] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:19.982 [2024-06-07 21:08:42.645641] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:19.982 [2024-06-07 21:08:42.645872] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:19.982 [2024-06-07 21:08:42.653687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:19.982 [2024-06-07 21:08:42.653899] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:19.982 [2024-06-07 21:08:42.654023] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:20.306 [2024-06-07 21:08:42.747304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:20.306 [2024-06-07 21:08:42.747780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.306 [2024-06-07 21:08:42.747878] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:20.306 [2024-06-07 21:08:42.748113] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.306 [2024-06-07 21:08:42.751131] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.306 [2024-06-07 21:08:42.751319] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:20.306 Running I/O for 1 seconds... 
00:12:21.694 00:12:21.694 Latency(us) 00:12:21.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.694 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.694 Malloc0 : 1.04 5675.24 22.17 0.00 0.00 22539.08 703.77 40513.16 00:12:21.694 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.694 Malloc1p0 : 1.04 5668.35 22.14 0.00 0.00 22520.93 901.12 39559.91 00:12:21.694 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.694 Malloc1p1 : 1.04 5662.24 22.12 0.00 0.00 22494.78 1079.85 38368.35 00:12:21.695 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.695 Malloc2p0 : 1.04 5656.25 22.09 0.00 0.00 22473.85 882.50 37415.10 00:12:21.695 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.695 Malloc2p1 : 1.04 5650.00 22.07 0.00 0.00 22453.14 997.93 36223.53 00:12:21.695 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.695 Malloc2p2 : 1.04 5644.06 22.05 0.00 0.00 22428.12 860.16 35270.28 00:12:21.695 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.695 Malloc2p3 : 1.04 5638.10 22.02 0.00 0.00 22413.47 960.70 34317.03 00:12:21.695 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.695 Malloc2p4 : 1.05 5631.95 22.00 0.00 0.00 22390.13 875.05 33363.78 00:12:21.695 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.695 Malloc2p5 : 1.05 5625.76 21.98 0.00 0.00 22372.25 1035.17 32410.53 00:12:21.695 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.695 Malloc2p6 : 1.05 5619.85 21.95 0.00 0.00 22343.89 878.78 31457.28 00:12:21.695 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.695 Malloc2p7 : 1.05 5614.00 21.93 0.00 0.00 22321.57 968.15 30384.87 00:12:21.695 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.695 TestPT : 1.05 5607.94 21.91 0.00 0.00 22304.71 897.40 29431.62 00:12:21.695 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.695 raid0 : 1.05 5601.07 21.88 0.00 0.00 22267.77 1735.21 27525.12 00:12:21.695 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.695 concat0 : 1.05 5594.28 21.85 0.00 0.00 22220.06 1526.69 25976.09 00:12:21.695 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.695 raid1 : 1.06 5679.11 22.18 0.00 0.00 21807.78 2368.23 25499.46 00:12:21.695 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:21.695 AIO0 : 1.06 5667.27 22.14 0.00 0.00 21753.18 1571.37 25499.46 00:12:21.695 =================================================================================================================== 00:12:21.695 Total : 90235.48 352.48 0.00 0.00 22317.63 703.77 40513.16 00:12:21.954 ************************************ 00:12:21.954 END TEST bdev_write_zeroes 00:12:21.954 ************************************ 00:12:21.954 00:12:21.954 real 0m2.283s 00:12:21.954 user 0m1.746s 00:12:21.954 sys 0m0.361s 00:12:21.954 21:08:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:21.954 21:08:44 -- common/autotest_common.sh@10 -- # set +x 00:12:21.954 21:08:44 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:21.954 21:08:44 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:21.954 21:08:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:21.954 21:08:44 -- common/autotest_common.sh@10 -- # set +x 00:12:21.954 ************************************ 00:12:21.954 START TEST bdev_json_nonenclosed 00:12:21.954 ************************************ 00:12:21.954 21:08:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:21.954 [2024-06-07 21:08:44.557744] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:12:21.954 [2024-06-07 21:08:44.558817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123848 ] 00:12:22.214 [2024-06-07 21:08:44.726380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.214 [2024-06-07 21:08:44.784557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.214 [2024-06-07 21:08:44.785109] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:22.214 [2024-06-07 21:08:44.785253] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:22.473 00:12:22.473 real 0m0.387s 00:12:22.473 user 0m0.177s 00:12:22.473 sys 0m0.107s 00:12:22.473 21:08:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.473 21:08:44 -- common/autotest_common.sh@10 -- # set +x 00:12:22.473 ************************************ 00:12:22.473 END TEST bdev_json_nonenclosed 00:12:22.473 ************************************ 00:12:22.473 21:08:44 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:22.473 21:08:44 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:22.473 21:08:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:22.473 21:08:44 -- common/autotest_common.sh@10 -- # set +x 00:12:22.473 ************************************ 00:12:22.473 START TEST bdev_json_nonarray 00:12:22.473 ************************************ 00:12:22.473 21:08:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:22.473 [2024-06-07 21:08:44.995162] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:12:22.473 [2024-06-07 21:08:44.995705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123870 ] 00:12:22.732 [2024-06-07 21:08:45.169073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.732 [2024-06-07 21:08:45.235006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.732 [2024-06-07 21:08:45.235576] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:12:22.732 [2024-06-07 21:08:45.235717] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:22.732 ************************************ 00:12:22.732 END TEST bdev_json_nonarray 00:12:22.732 ************************************ 00:12:22.732 00:12:22.732 real 0m0.397s 00:12:22.732 user 0m0.175s 00:12:22.732 sys 0m0.121s 00:12:22.732 21:08:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.732 21:08:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.732 21:08:45 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:12:22.732 21:08:45 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:12:22.732 21:08:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:22.732 21:08:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:22.732 21:08:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.732 ************************************ 00:12:22.732 START TEST bdev_qos 00:12:22.732 ************************************ 00:12:22.732 Process qos testing pid: 123901 00:12:22.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.732 21:08:45 -- common/autotest_common.sh@1104 -- # qos_test_suite '' 00:12:22.732 21:08:45 -- bdev/blockdev.sh@444 -- # QOS_PID=123901 00:12:22.732 21:08:45 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 123901' 00:12:22.732 21:08:45 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:12:22.732 21:08:45 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:12:22.732 21:08:45 -- bdev/blockdev.sh@447 -- # waitforlisten 123901 00:12:22.732 21:08:45 -- common/autotest_common.sh@819 -- # '[' -z 123901 ']' 00:12:22.732 21:08:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.732 21:08:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:22.732 21:08:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.732 21:08:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:22.732 21:08:45 -- common/autotest_common.sh@10 -- # set +x 00:12:22.991 [2024-06-07 21:08:45.441334] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
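The two json_config errors above (not enclosed in {} from nonenclosed.json, and 'subsystems' should be an array from nonarray.json) bracket the shape bdevperf's --json loader accepts: a top-level JSON object whose "subsystems" key is an array. A minimal well-formed config as a hedged illustration; the empty bdev subsystem entry is a placeholder, not the contents of either test file:

    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }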
00:12:22.991 [2024-06-07 21:08:45.441750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123901 ] 00:12:22.991 [2024-06-07 21:08:45.598588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.249 [2024-06-07 21:08:45.684135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.817 21:08:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:23.817 21:08:46 -- common/autotest_common.sh@852 -- # return 0 00:12:23.817 21:08:46 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:12:23.817 21:08:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.817 21:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:23.817 Malloc_0 00:12:23.817 21:08:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.817 21:08:46 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:12:23.817 21:08:46 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0 00:12:23.817 21:08:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:23.817 21:08:46 -- common/autotest_common.sh@889 -- # local i 00:12:23.817 21:08:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:23.817 21:08:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:23.817 21:08:46 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:12:23.817 21:08:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.817 21:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:23.817 21:08:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.817 21:08:46 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:12:23.817 21:08:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.817 21:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:23.817 [ 00:12:23.817 { 00:12:23.817 "name": "Malloc_0", 00:12:23.817 "aliases": [ 00:12:23.817 "66b2ff3f-aa52-4c93-9e09-debbc066ea82" 00:12:23.817 ], 00:12:23.817 "product_name": "Malloc disk", 00:12:23.817 "block_size": 512, 00:12:23.817 "num_blocks": 262144, 00:12:23.817 "uuid": "66b2ff3f-aa52-4c93-9e09-debbc066ea82", 00:12:23.817 "assigned_rate_limits": { 00:12:23.817 "rw_ios_per_sec": 0, 00:12:23.817 "rw_mbytes_per_sec": 0, 00:12:23.817 "r_mbytes_per_sec": 0, 00:12:23.817 "w_mbytes_per_sec": 0 00:12:23.817 }, 00:12:23.817 "claimed": false, 00:12:23.817 "zoned": false, 00:12:23.817 "supported_io_types": { 00:12:23.817 "read": true, 00:12:23.817 "write": true, 00:12:23.817 "unmap": true, 00:12:23.817 "write_zeroes": true, 00:12:23.817 "flush": true, 00:12:23.817 "reset": true, 00:12:23.817 "compare": false, 00:12:23.817 "compare_and_write": false, 00:12:23.817 "abort": true, 00:12:23.817 "nvme_admin": false, 00:12:23.817 "nvme_io": false 00:12:23.817 }, 00:12:23.817 "memory_domains": [ 00:12:23.817 { 00:12:23.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.817 "dma_device_type": 2 00:12:23.817 } 00:12:23.817 ], 00:12:23.817 "driver_specific": {} 00:12:23.817 } 00:12:23.817 ] 00:12:23.817 21:08:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.817 21:08:46 -- common/autotest_common.sh@895 -- # return 0 00:12:23.817 21:08:46 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:12:23.817 21:08:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.817 21:08:46 -- common/autotest_common.sh@10 -- # 
set +x 00:12:23.817 Null_1 00:12:23.817 21:08:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.817 21:08:46 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:12:23.817 21:08:46 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1 00:12:23.817 21:08:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:23.817 21:08:46 -- common/autotest_common.sh@889 -- # local i 00:12:23.817 21:08:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:23.817 21:08:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:23.817 21:08:46 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:12:23.817 21:08:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.817 21:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:23.817 21:08:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.817 21:08:46 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:12:23.817 21:08:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.817 21:08:46 -- common/autotest_common.sh@10 -- # set +x 00:12:23.817 [ 00:12:23.817 { 00:12:23.817 "name": "Null_1", 00:12:23.817 "aliases": [ 00:12:23.817 "dcb4d2bf-01b9-46dc-a714-7f15f5b9262f" 00:12:23.817 ], 00:12:23.817 "product_name": "Null disk", 00:12:23.817 "block_size": 512, 00:12:23.817 "num_blocks": 262144, 00:12:23.817 "uuid": "dcb4d2bf-01b9-46dc-a714-7f15f5b9262f", 00:12:23.817 "assigned_rate_limits": { 00:12:23.817 "rw_ios_per_sec": 0, 00:12:23.817 "rw_mbytes_per_sec": 0, 00:12:23.817 "r_mbytes_per_sec": 0, 00:12:23.817 "w_mbytes_per_sec": 0 00:12:23.817 }, 00:12:23.817 "claimed": false, 00:12:23.818 "zoned": false, 00:12:23.818 "supported_io_types": { 00:12:23.818 "read": true, 00:12:23.818 "write": true, 00:12:23.818 "unmap": false, 00:12:23.818 "write_zeroes": true, 00:12:23.818 "flush": false, 00:12:23.818 "reset": true, 00:12:23.818 "compare": false, 00:12:23.818 "compare_and_write": false, 00:12:23.818 "abort": true, 00:12:23.818 "nvme_admin": false, 00:12:23.818 "nvme_io": false 00:12:23.818 }, 00:12:23.818 "driver_specific": {} 00:12:23.818 } 00:12:23.818 ] 00:12:23.818 21:08:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.818 21:08:46 -- common/autotest_common.sh@895 -- # return 0 00:12:23.818 21:08:46 -- bdev/blockdev.sh@455 -- # qos_function_test 00:12:23.818 21:08:46 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:23.818 21:08:46 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:12:23.818 21:08:46 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:12:23.818 21:08:46 -- bdev/blockdev.sh@410 -- # local io_result=0 00:12:23.818 21:08:46 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:12:23.818 21:08:46 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:12:23.818 21:08:46 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:12:23.818 21:08:46 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:12:23.818 21:08:46 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:12:23.818 21:08:46 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:23.818 21:08:46 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:23.818 21:08:46 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:12:23.818 21:08:46 -- bdev/blockdev.sh@376 -- # tail -1 00:12:24.076 Running I/O for 60 seconds... 
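Note on the 60-second run that begins here: the suite first measures Malloc_0 unthrottled via iostat.py (the grep/tail/awk pipeline traced above), derives an IOPS cap of roughly a quarter of the measured rate rounded down to thousands (78326 -> 19000 in the output that follows), applies it, and re-measures. A paraphrase of that logic; the exact rounding expression is a reconstruction, not a quote from blockdev.sh:

  io_result=$(scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print $2}')
  iops_limit=$(( (${io_result%.*} / 4) / 1000 * 1000 ))
  scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec "$iops_limit" Malloc_0

run_qos_test then passes if the re-measured rate lands within +/-10% of the cap, which is where the 17100..20900 band below comes from.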
00:12:29.348 21:08:51 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 78326.19 313304.75 0.00 0.00 317440.00 0.00 0.00 ' 00:12:29.348 21:08:51 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:12:29.348 21:08:51 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:12:29.348 21:08:51 -- bdev/blockdev.sh@378 -- # iostat_result=78326.19 00:12:29.348 21:08:51 -- bdev/blockdev.sh@383 -- # echo 78326 00:12:29.348 21:08:51 -- bdev/blockdev.sh@414 -- # io_result=78326 00:12:29.348 21:08:51 -- bdev/blockdev.sh@416 -- # iops_limit=19000 00:12:29.348 21:08:51 -- bdev/blockdev.sh@417 -- # '[' 19000 -gt 1000 ']' 00:12:29.348 21:08:51 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 19000 Malloc_0 00:12:29.348 21:08:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.348 21:08:51 -- common/autotest_common.sh@10 -- # set +x 00:12:29.348 21:08:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.348 21:08:51 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 19000 IOPS Malloc_0 00:12:29.348 21:08:51 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:29.348 21:08:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:29.348 21:08:51 -- common/autotest_common.sh@10 -- # set +x 00:12:29.348 ************************************ 00:12:29.348 START TEST bdev_qos_iops 00:12:29.348 ************************************ 00:12:29.348 21:08:51 -- common/autotest_common.sh@1104 -- # run_qos_test 19000 IOPS Malloc_0 00:12:29.348 21:08:51 -- bdev/blockdev.sh@387 -- # local qos_limit=19000 00:12:29.348 21:08:51 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:12:29.348 21:08:51 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:12:29.348 21:08:51 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:12:29.348 21:08:51 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:12:29.348 21:08:51 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:29.348 21:08:51 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:29.348 21:08:51 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:12:29.348 21:08:51 -- bdev/blockdev.sh@376 -- # tail -1 00:12:34.615 21:08:56 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 19007.36 76029.43 0.00 0.00 76988.00 0.00 0.00 ' 00:12:34.615 21:08:56 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:12:34.615 21:08:56 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:12:34.615 21:08:56 -- bdev/blockdev.sh@378 -- # iostat_result=19007.36 00:12:34.616 21:08:56 -- bdev/blockdev.sh@383 -- # echo 19007 00:12:34.616 ************************************ 00:12:34.616 END TEST bdev_qos_iops 00:12:34.616 ************************************ 00:12:34.616 21:08:56 -- bdev/blockdev.sh@390 -- # qos_result=19007 00:12:34.616 21:08:56 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:12:34.616 21:08:56 -- bdev/blockdev.sh@394 -- # lower_limit=17100 00:12:34.616 21:08:56 -- bdev/blockdev.sh@395 -- # upper_limit=20900 00:12:34.616 21:08:56 -- bdev/blockdev.sh@398 -- # '[' 19007 -lt 17100 ']' 00:12:34.616 21:08:56 -- bdev/blockdev.sh@398 -- # '[' 19007 -gt 20900 ']' 00:12:34.616 00:12:34.616 real 0m5.201s 00:12:34.616 user 0m0.114s 00:12:34.616 sys 0m0.020s 00:12:34.616 21:08:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:34.616 21:08:56 -- common/autotest_common.sh@10 -- # set +x 00:12:34.616 21:08:56 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:12:34.616 21:08:56 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:12:34.616 21:08:56 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:12:34.616 21:08:56 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:34.616 21:08:56 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:34.616 21:08:56 -- bdev/blockdev.sh@376 -- # grep Null_1 00:12:34.616 21:08:56 -- bdev/blockdev.sh@376 -- # tail -1 00:12:39.882 21:09:02 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 27962.25 111849.00 0.00 0.00 113664.00 0.00 0.00 ' 00:12:39.882 21:09:02 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:12:39.882 21:09:02 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:39.882 21:09:02 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:12:39.882 21:09:02 -- bdev/blockdev.sh@380 -- # iostat_result=113664.00 00:12:39.882 21:09:02 -- bdev/blockdev.sh@383 -- # echo 113664 00:12:39.882 21:09:02 -- bdev/blockdev.sh@425 -- # bw_limit=113664 00:12:39.882 21:09:02 -- bdev/blockdev.sh@426 -- # bw_limit=11 00:12:39.882 21:09:02 -- bdev/blockdev.sh@427 -- # '[' 11 -lt 2 ']' 00:12:39.882 21:09:02 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 11 Null_1 00:12:39.882 21:09:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.882 21:09:02 -- common/autotest_common.sh@10 -- # set +x 00:12:39.882 21:09:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.882 21:09:02 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 11 BANDWIDTH Null_1 00:12:39.882 21:09:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:39.882 21:09:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:39.882 21:09:02 -- common/autotest_common.sh@10 -- # set +x 00:12:39.882 ************************************ 00:12:39.882 START TEST bdev_qos_bw 00:12:39.882 ************************************ 00:12:39.882 21:09:02 -- common/autotest_common.sh@1104 -- # run_qos_test 11 BANDWIDTH Null_1 00:12:39.882 21:09:02 -- bdev/blockdev.sh@387 -- # local qos_limit=11 00:12:39.882 21:09:02 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:12:39.882 21:09:02 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:12:39.882 21:09:02 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:12:39.882 21:09:02 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:12:39.882 21:09:02 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:39.882 21:09:02 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:39.882 21:09:02 -- bdev/blockdev.sh@376 -- # grep Null_1 00:12:39.882 21:09:02 -- bdev/blockdev.sh@376 -- # tail -1 00:12:45.147 21:09:07 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 2815.47 11261.88 0.00 0.00 11476.00 0.00 0.00 ' 00:12:45.147 21:09:07 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:12:45.147 21:09:07 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:45.147 21:09:07 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:12:45.147 21:09:07 -- bdev/blockdev.sh@380 -- # iostat_result=11476.00 00:12:45.147 21:09:07 -- bdev/blockdev.sh@383 -- # echo 11476 00:12:45.147 ************************************ 00:12:45.147 END TEST bdev_qos_bw 00:12:45.147 ************************************ 00:12:45.147 21:09:07 -- bdev/blockdev.sh@390 -- # qos_result=11476 00:12:45.147 21:09:07 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:45.147 21:09:07 -- bdev/blockdev.sh@392 -- # qos_limit=11264 00:12:45.147 21:09:07 -- bdev/blockdev.sh@394 -- # lower_limit=10137 00:12:45.147 21:09:07 -- bdev/blockdev.sh@395 -- # 
upper_limit=12390 00:12:45.147 21:09:07 -- bdev/blockdev.sh@398 -- # '[' 11476 -lt 10137 ']' 00:12:45.147 21:09:07 -- bdev/blockdev.sh@398 -- # '[' 11476 -gt 12390 ']' 00:12:45.147 00:12:45.147 real 0m5.224s 00:12:45.147 user 0m0.109s 00:12:45.147 sys 0m0.024s 00:12:45.147 21:09:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.147 21:09:07 -- common/autotest_common.sh@10 -- # set +x 00:12:45.147 21:09:07 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:12:45.147 21:09:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.147 21:09:07 -- common/autotest_common.sh@10 -- # set +x 00:12:45.147 21:09:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.147 21:09:07 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:12:45.147 21:09:07 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:45.147 21:09:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:45.147 21:09:07 -- common/autotest_common.sh@10 -- # set +x 00:12:45.147 ************************************ 00:12:45.147 START TEST bdev_qos_ro_bw 00:12:45.147 ************************************ 00:12:45.147 21:09:07 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:12:45.147 21:09:07 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:12:45.147 21:09:07 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:12:45.147 21:09:07 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:12:45.147 21:09:07 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:12:45.147 21:09:07 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:12:45.147 21:09:07 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:45.147 21:09:07 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:45.147 21:09:07 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:12:45.147 21:09:07 -- bdev/blockdev.sh@376 -- # tail -1 00:12:50.413 21:09:12 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.61 2046.43 0.00 0.00 2060.00 0.00 0.00 ' 00:12:50.413 21:09:12 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:12:50.413 21:09:12 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:50.413 21:09:12 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:12:50.413 21:09:12 -- bdev/blockdev.sh@380 -- # iostat_result=2060.00 00:12:50.413 21:09:12 -- bdev/blockdev.sh@383 -- # echo 2060 00:12:50.413 21:09:12 -- bdev/blockdev.sh@390 -- # qos_result=2060 00:12:50.413 ************************************ 00:12:50.413 END TEST bdev_qos_ro_bw 00:12:50.413 ************************************ 00:12:50.413 21:09:12 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:50.413 21:09:12 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:12:50.413 21:09:12 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:12:50.413 21:09:12 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:12:50.413 21:09:12 -- bdev/blockdev.sh@398 -- # '[' 2060 -lt 1843 ']' 00:12:50.413 21:09:12 -- bdev/blockdev.sh@398 -- # '[' 2060 -gt 2252 ']' 00:12:50.413 00:12:50.413 real 0m5.158s 00:12:50.413 user 0m0.116s 00:12:50.413 sys 0m0.017s 00:12:50.413 21:09:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:50.413 21:09:12 -- common/autotest_common.sh@10 -- # set +x 00:12:50.413 21:09:12 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:12:50.413 21:09:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.413 21:09:12 -- common/autotest_common.sh@10 -- # set +x 
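Note: at this point the suite has exercised all three QoS dimensions, each through the same RPC and each verified by re-sampling iostat.py and accepting a result within +/-10% of the cap (the read-only case above: a 2 MiB/s cap, measured 2060 KB/s against the 1843..2252 band). The three throttles, as issued via rpc_cmd in the traces, shown here as direct rpc.py calls:

  scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec 19000 Malloc_0   # total IOPS
  scripts/rpc.py bdev_set_qos_limit --rw_mbytes_per_sec 11 Null_1     # total bandwidth
  scripts/rpc.py bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0     # read-only bandwidth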
00:12:50.671 21:09:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.671 21:09:13 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:12:50.671 21:09:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.671 21:09:13 -- common/autotest_common.sh@10 -- # set +x 00:12:50.671 00:12:50.671 Latency(us) 00:12:50.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.671 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:50.671 Malloc_0 : 26.62 25993.31 101.54 0.00 0.00 9756.46 2129.92 503316.48 00:12:50.671 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:50.671 Null_1 : 26.74 26366.69 102.99 0.00 0.00 9689.53 711.21 115343.36 00:12:50.671 =================================================================================================================== 00:12:50.671 Total : 52360.00 204.53 0.00 0.00 9722.69 711.21 503316.48 00:12:50.671 0 00:12:50.671 21:09:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.671 21:09:13 -- bdev/blockdev.sh@459 -- # killprocess 123901 00:12:50.671 21:09:13 -- common/autotest_common.sh@926 -- # '[' -z 123901 ']' 00:12:50.671 21:09:13 -- common/autotest_common.sh@930 -- # kill -0 123901 00:12:50.671 21:09:13 -- common/autotest_common.sh@931 -- # uname 00:12:50.671 21:09:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:50.671 21:09:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123901 00:12:50.930 killing process with pid 123901 00:12:50.930 Received shutdown signal, test time was about 26.769436 seconds 00:12:50.930 00:12:50.930 Latency(us) 00:12:50.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.930 =================================================================================================================== 00:12:50.930 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:50.930 21:09:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:50.930 21:09:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:50.930 21:09:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123901' 00:12:50.930 21:09:13 -- common/autotest_common.sh@945 -- # kill 123901 00:12:50.930 21:09:13 -- common/autotest_common.sh@950 -- # wait 123901 00:12:51.189 21:09:13 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:12:51.189 00:12:51.189 real 0m28.215s 00:12:51.189 user 0m28.929s 00:12:51.189 sys 0m0.588s 00:12:51.189 21:09:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:51.189 21:09:13 -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 ************************************ 00:12:51.189 END TEST bdev_qos 00:12:51.189 ************************************ 00:12:51.189 21:09:13 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:12:51.189 21:09:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:51.189 21:09:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:51.189 21:09:13 -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 ************************************ 00:12:51.189 START TEST bdev_qd_sampling 00:12:51.189 ************************************ 00:12:51.189 Process bdev QD sampling period testing pid: 124416 00:12:51.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
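Note: the QD sampling test starting here runs bdevperf with -z -m 0x3 (two reactor cores, hence the two Malloc_QD jobs in its latency table below) and checks that a queue-depth sampling period survives a round trip through the stat RPC. The core of it, paths relative to the repo root:

  scripts/rpc.py bdev_malloc_create -b Malloc_QD 128 512
  scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10
  scripts/rpc.py bdev_get_iostat -b Malloc_QD \
      | jq -r '.bdevs[0].queue_depth_polling_period'    # must print 10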
00:12:51.189 21:09:13 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite '' 00:12:51.189 21:09:13 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:12:51.189 21:09:13 -- bdev/blockdev.sh@539 -- # QD_PID=124416 00:12:51.189 21:09:13 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 124416' 00:12:51.189 21:09:13 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:12:51.189 21:09:13 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:12:51.189 21:09:13 -- bdev/blockdev.sh@542 -- # waitforlisten 124416 00:12:51.189 21:09:13 -- common/autotest_common.sh@819 -- # '[' -z 124416 ']' 00:12:51.189 21:09:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.189 21:09:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:51.189 21:09:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.189 21:09:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:51.189 21:09:13 -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 [2024-06-07 21:09:13.719095] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:12:51.189 [2024-06-07 21:09:13.719565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124416 ] 00:12:51.448 [2024-06-07 21:09:13.893776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:51.448 [2024-06-07 21:09:13.979563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.448 [2024-06-07 21:09:13.979577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.015 21:09:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:52.015 21:09:14 -- common/autotest_common.sh@852 -- # return 0 00:12:52.015 21:09:14 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:12:52.015 21:09:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.015 21:09:14 -- common/autotest_common.sh@10 -- # set +x 00:12:52.274 Malloc_QD 00:12:52.274 21:09:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.274 21:09:14 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:12:52.274 21:09:14 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD 00:12:52.274 21:09:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:52.274 21:09:14 -- common/autotest_common.sh@889 -- # local i 00:12:52.274 21:09:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:52.274 21:09:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:52.274 21:09:14 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:12:52.274 21:09:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.274 21:09:14 -- common/autotest_common.sh@10 -- # set +x 00:12:52.274 21:09:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.274 21:09:14 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:12:52.274 21:09:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.274 21:09:14 -- common/autotest_common.sh@10 -- # set +x 00:12:52.274 [ 00:12:52.274 { 00:12:52.274 "name": "Malloc_QD", 00:12:52.274 "aliases": [ 00:12:52.274 
"e8004666-1838-446a-8636-8fc7acebc1bb" 00:12:52.274 ], 00:12:52.274 "product_name": "Malloc disk", 00:12:52.274 "block_size": 512, 00:12:52.274 "num_blocks": 262144, 00:12:52.274 "uuid": "e8004666-1838-446a-8636-8fc7acebc1bb", 00:12:52.274 "assigned_rate_limits": { 00:12:52.274 "rw_ios_per_sec": 0, 00:12:52.274 "rw_mbytes_per_sec": 0, 00:12:52.274 "r_mbytes_per_sec": 0, 00:12:52.274 "w_mbytes_per_sec": 0 00:12:52.274 }, 00:12:52.274 "claimed": false, 00:12:52.274 "zoned": false, 00:12:52.274 "supported_io_types": { 00:12:52.274 "read": true, 00:12:52.274 "write": true, 00:12:52.274 "unmap": true, 00:12:52.274 "write_zeroes": true, 00:12:52.274 "flush": true, 00:12:52.274 "reset": true, 00:12:52.274 "compare": false, 00:12:52.274 "compare_and_write": false, 00:12:52.274 "abort": true, 00:12:52.274 "nvme_admin": false, 00:12:52.274 "nvme_io": false 00:12:52.274 }, 00:12:52.274 "memory_domains": [ 00:12:52.274 { 00:12:52.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.274 "dma_device_type": 2 00:12:52.274 } 00:12:52.274 ], 00:12:52.274 "driver_specific": {} 00:12:52.274 } 00:12:52.274 ] 00:12:52.274 21:09:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.274 21:09:14 -- common/autotest_common.sh@895 -- # return 0 00:12:52.274 21:09:14 -- bdev/blockdev.sh@548 -- # sleep 2 00:12:52.274 21:09:14 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:52.274 Running I/O for 5 seconds... 00:12:54.178 21:09:16 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:12:54.178 21:09:16 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:12:54.178 21:09:16 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:12:54.178 21:09:16 -- bdev/blockdev.sh@519 -- # local iostats 00:12:54.178 21:09:16 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:12:54.178 21:09:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.178 21:09:16 -- common/autotest_common.sh@10 -- # set +x 00:12:54.178 21:09:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.178 21:09:16 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:12:54.178 21:09:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.178 21:09:16 -- common/autotest_common.sh@10 -- # set +x 00:12:54.178 21:09:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.178 21:09:16 -- bdev/blockdev.sh@523 -- # iostats='{ 00:12:54.178 "tick_rate": 2200000000, 00:12:54.178 "ticks": 1573556802034, 00:12:54.178 "bdevs": [ 00:12:54.178 { 00:12:54.178 "name": "Malloc_QD", 00:12:54.178 "bytes_read": 930124288, 00:12:54.178 "num_read_ops": 227075, 00:12:54.178 "bytes_written": 0, 00:12:54.178 "num_write_ops": 0, 00:12:54.178 "bytes_unmapped": 0, 00:12:54.178 "num_unmap_ops": 0, 00:12:54.178 "bytes_copied": 0, 00:12:54.178 "num_copy_ops": 0, 00:12:54.178 "read_latency_ticks": 2171180160155, 00:12:54.178 "max_read_latency_ticks": 13038368, 00:12:54.178 "min_read_latency_ticks": 402506, 00:12:54.178 "write_latency_ticks": 0, 00:12:54.178 "max_write_latency_ticks": 0, 00:12:54.178 "min_write_latency_ticks": 0, 00:12:54.178 "unmap_latency_ticks": 0, 00:12:54.178 "max_unmap_latency_ticks": 0, 00:12:54.178 "min_unmap_latency_ticks": 0, 00:12:54.178 "copy_latency_ticks": 0, 00:12:54.178 "max_copy_latency_ticks": 0, 00:12:54.178 "min_copy_latency_ticks": 0, 00:12:54.178 "io_error": {}, 00:12:54.178 "queue_depth_polling_period": 10, 00:12:54.178 "queue_depth": 512, 00:12:54.178 "io_time": 20, 00:12:54.178 
"weighted_io_time": 10240 00:12:54.178 } 00:12:54.178 ] 00:12:54.178 }' 00:12:54.178 21:09:16 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:12:54.178 21:09:16 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:12:54.178 21:09:16 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:12:54.178 21:09:16 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:12:54.178 21:09:16 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:12:54.178 21:09:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.178 21:09:16 -- common/autotest_common.sh@10 -- # set +x 00:12:54.437 00:12:54.437 Latency(us) 00:12:54.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.437 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:12:54.437 Malloc_QD : 2.01 58462.99 228.37 0.00 0.00 4367.62 1280.93 5928.03 00:12:54.437 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:54.437 Malloc_QD : 2.01 59089.45 230.82 0.00 0.00 4321.96 1117.09 5183.30 00:12:54.438 =================================================================================================================== 00:12:54.438 Total : 117552.44 459.19 0.00 0.00 4344.67 1117.09 5928.03 00:12:54.438 0 00:12:54.438 21:09:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.438 21:09:16 -- bdev/blockdev.sh@552 -- # killprocess 124416 00:12:54.438 21:09:16 -- common/autotest_common.sh@926 -- # '[' -z 124416 ']' 00:12:54.438 21:09:16 -- common/autotest_common.sh@930 -- # kill -0 124416 00:12:54.438 21:09:16 -- common/autotest_common.sh@931 -- # uname 00:12:54.438 21:09:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:54.438 21:09:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124416 00:12:54.438 killing process with pid 124416 00:12:54.438 Received shutdown signal, test time was about 2.056281 seconds 00:12:54.438 00:12:54.438 Latency(us) 00:12:54.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.438 =================================================================================================================== 00:12:54.438 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:54.438 21:09:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:54.438 21:09:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:54.438 21:09:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124416' 00:12:54.438 21:09:16 -- common/autotest_common.sh@945 -- # kill 124416 00:12:54.438 21:09:16 -- common/autotest_common.sh@950 -- # wait 124416 00:12:54.697 ************************************ 00:12:54.697 END TEST bdev_qd_sampling 00:12:54.697 ************************************ 00:12:54.697 21:09:17 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:12:54.697 00:12:54.697 real 0m3.498s 00:12:54.697 user 0m6.784s 00:12:54.697 sys 0m0.319s 00:12:54.697 21:09:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:54.697 21:09:17 -- common/autotest_common.sh@10 -- # set +x 00:12:54.697 21:09:17 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:12:54.697 21:09:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:54.697 21:09:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:54.697 21:09:17 -- common/autotest_common.sh@10 -- # set +x 00:12:54.697 ************************************ 00:12:54.697 START TEST bdev_error 00:12:54.697 
************************************ 00:12:54.697 Process error testing pid: 124519 00:12:54.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.697 21:09:17 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:12:54.697 21:09:17 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:12:54.697 21:09:17 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:12:54.697 21:09:17 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:12:54.697 21:09:17 -- bdev/blockdev.sh@470 -- # ERR_PID=124519 00:12:54.697 21:09:17 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 124519' 00:12:54.697 21:09:17 -- bdev/blockdev.sh@472 -- # waitforlisten 124519 00:12:54.697 21:09:17 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:12:54.697 21:09:17 -- common/autotest_common.sh@819 -- # '[' -z 124519 ']' 00:12:54.697 21:09:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.697 21:09:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:54.697 21:09:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.697 21:09:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:54.697 21:09:17 -- common/autotest_common.sh@10 -- # set +x 00:12:54.697 [2024-06-07 21:09:17.270468] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:12:54.697 [2024-06-07 21:09:17.270975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124519 ] 00:12:54.955 [2024-06-07 21:09:17.436276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.955 [2024-06-07 21:09:17.501951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.892 21:09:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:55.892 21:09:18 -- common/autotest_common.sh@852 -- # return 0 00:12:55.892 21:09:18 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:12:55.892 21:09:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.892 21:09:18 -- common/autotest_common.sh@10 -- # set +x 00:12:55.892 Dev_1 00:12:55.892 21:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.892 21:09:18 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:12:55.892 21:09:18 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:12:55.892 21:09:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:55.892 21:09:18 -- common/autotest_common.sh@889 -- # local i 00:12:55.892 21:09:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:55.892 21:09:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:55.892 21:09:18 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:12:55.892 21:09:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.892 21:09:18 -- common/autotest_common.sh@10 -- # set +x 00:12:55.892 21:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.892 21:09:18 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:12:55.892 21:09:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.892 21:09:18 -- common/autotest_common.sh@10 -- # set +x 00:12:55.892 [ 00:12:55.892 { 00:12:55.892 "name": "Dev_1", 00:12:55.892 
"aliases": [ 00:12:55.892 "efbd7494-a491-44ee-95a1-9811215e4466" 00:12:55.892 ], 00:12:55.892 "product_name": "Malloc disk", 00:12:55.892 "block_size": 512, 00:12:55.892 "num_blocks": 262144, 00:12:55.892 "uuid": "efbd7494-a491-44ee-95a1-9811215e4466", 00:12:55.892 "assigned_rate_limits": { 00:12:55.892 "rw_ios_per_sec": 0, 00:12:55.892 "rw_mbytes_per_sec": 0, 00:12:55.892 "r_mbytes_per_sec": 0, 00:12:55.892 "w_mbytes_per_sec": 0 00:12:55.892 }, 00:12:55.892 "claimed": false, 00:12:55.892 "zoned": false, 00:12:55.892 "supported_io_types": { 00:12:55.892 "read": true, 00:12:55.892 "write": true, 00:12:55.892 "unmap": true, 00:12:55.892 "write_zeroes": true, 00:12:55.892 "flush": true, 00:12:55.892 "reset": true, 00:12:55.892 "compare": false, 00:12:55.892 "compare_and_write": false, 00:12:55.892 "abort": true, 00:12:55.892 "nvme_admin": false, 00:12:55.892 "nvme_io": false 00:12:55.892 }, 00:12:55.892 "memory_domains": [ 00:12:55.892 { 00:12:55.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.892 "dma_device_type": 2 00:12:55.892 } 00:12:55.892 ], 00:12:55.892 "driver_specific": {} 00:12:55.892 } 00:12:55.892 ] 00:12:55.892 21:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.892 21:09:18 -- common/autotest_common.sh@895 -- # return 0 00:12:55.892 21:09:18 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:12:55.892 21:09:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.892 21:09:18 -- common/autotest_common.sh@10 -- # set +x 00:12:55.892 true 00:12:55.892 21:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.892 21:09:18 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:12:55.892 21:09:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.892 21:09:18 -- common/autotest_common.sh@10 -- # set +x 00:12:55.892 Dev_2 00:12:55.892 21:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.892 21:09:18 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:12:55.892 21:09:18 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:12:55.892 21:09:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:55.892 21:09:18 -- common/autotest_common.sh@889 -- # local i 00:12:55.892 21:09:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:55.892 21:09:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:55.892 21:09:18 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:12:55.892 21:09:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.892 21:09:18 -- common/autotest_common.sh@10 -- # set +x 00:12:55.892 21:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.892 21:09:18 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:12:55.892 21:09:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.892 21:09:18 -- common/autotest_common.sh@10 -- # set +x 00:12:55.892 [ 00:12:55.892 { 00:12:55.892 "name": "Dev_2", 00:12:55.892 "aliases": [ 00:12:55.892 "b52c73ec-222d-4339-afa1-e3f4de832e94" 00:12:55.892 ], 00:12:55.892 "product_name": "Malloc disk", 00:12:55.892 "block_size": 512, 00:12:55.892 "num_blocks": 262144, 00:12:55.892 "uuid": "b52c73ec-222d-4339-afa1-e3f4de832e94", 00:12:55.892 "assigned_rate_limits": { 00:12:55.892 "rw_ios_per_sec": 0, 00:12:55.892 "rw_mbytes_per_sec": 0, 00:12:55.892 "r_mbytes_per_sec": 0, 00:12:55.892 "w_mbytes_per_sec": 0 00:12:55.892 }, 00:12:55.892 "claimed": false, 00:12:55.892 "zoned": false, 00:12:55.892 "supported_io_types": { 00:12:55.892 "read": 
true, 00:12:55.892 "write": true, 00:12:55.892 "unmap": true, 00:12:55.892 "write_zeroes": true, 00:12:55.892 "flush": true, 00:12:55.892 "reset": true, 00:12:55.892 "compare": false, 00:12:55.892 "compare_and_write": false, 00:12:55.892 "abort": true, 00:12:55.892 "nvme_admin": false, 00:12:55.892 "nvme_io": false 00:12:55.892 }, 00:12:55.892 "memory_domains": [ 00:12:55.892 { 00:12:55.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.892 "dma_device_type": 2 00:12:55.892 } 00:12:55.892 ], 00:12:55.892 "driver_specific": {} 00:12:55.892 } 00:12:55.892 ] 00:12:55.892 21:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.892 21:09:18 -- common/autotest_common.sh@895 -- # return 0 00:12:55.892 21:09:18 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:12:55.892 21:09:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.892 21:09:18 -- common/autotest_common.sh@10 -- # set +x 00:12:55.892 21:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.892 21:09:18 -- bdev/blockdev.sh@482 -- # sleep 1 00:12:55.892 21:09:18 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:12:55.892 Running I/O for 5 seconds... 00:12:56.828 Process is existed as continue on error is set. Pid: 124519 00:12:56.828 21:09:19 -- bdev/blockdev.sh@485 -- # kill -0 124519 00:12:56.828 21:09:19 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 124519' 00:12:56.828 21:09:19 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:12:56.828 21:09:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.828 21:09:19 -- common/autotest_common.sh@10 -- # set +x 00:12:56.828 21:09:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.828 21:09:19 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:12:56.828 21:09:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.828 21:09:19 -- common/autotest_common.sh@10 -- # set +x 00:12:56.828 21:09:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.828 21:09:19 -- bdev/blockdev.sh@495 -- # sleep 5 00:12:57.087 Timeout while waiting for response: 00:12:57.087 00:12:57.087 00:13:01.290 00:13:01.290 Latency(us) 00:13:01.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.290 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:01.290 EE_Dev_1 : 0.91 42546.41 166.20 5.51 0.00 373.31 180.60 852.71 00:13:01.290 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:01.290 Dev_2 : 5.00 92485.59 361.27 0.00 0.00 170.33 56.09 24546.21 00:13:01.290 =================================================================================================================== 00:13:01.290 Total : 135032.00 527.47 5.51 0.00 185.97 56.09 24546.21 00:13:01.857 21:09:24 -- bdev/blockdev.sh@497 -- # killprocess 124519 00:13:01.857 21:09:24 -- common/autotest_common.sh@926 -- # '[' -z 124519 ']' 00:13:01.857 21:09:24 -- common/autotest_common.sh@930 -- # kill -0 124519 00:13:01.857 21:09:24 -- common/autotest_common.sh@931 -- # uname 00:13:01.857 21:09:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:01.857 21:09:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124519 00:13:01.857 killing process with pid 124519 00:13:01.857 Received shutdown signal, test time was about 5.000000 seconds 00:13:01.857 00:13:01.857 Latency(us) 00:13:01.857 Device Information 
: runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.857 =================================================================================================================== 00:13:01.857 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:01.857 21:09:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:01.857 21:09:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:01.857 21:09:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124519' 00:13:01.857 21:09:24 -- common/autotest_common.sh@945 -- # kill 124519 00:13:01.857 21:09:24 -- common/autotest_common.sh@950 -- # wait 124519 00:13:02.425 Process error testing pid: 124620 00:13:02.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.425 21:09:24 -- bdev/blockdev.sh@501 -- # ERR_PID=124620 00:13:02.425 21:09:24 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:13:02.425 21:09:24 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 124620' 00:13:02.425 21:09:24 -- bdev/blockdev.sh@503 -- # waitforlisten 124620 00:13:02.425 21:09:24 -- common/autotest_common.sh@819 -- # '[' -z 124620 ']' 00:13:02.425 21:09:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.425 21:09:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:02.425 21:09:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.425 21:09:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:02.425 21:09:24 -- common/autotest_common.sh@10 -- # set +x 00:13:02.426 [2024-06-07 21:09:24.836470] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
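Note on the error suite: EE_Dev_1 is an error-injecting bdev stacked on top of Dev_1 by bdev_error_create, and bdev_error_inject_error arms it to fail the next 5 I/Os. In the first pass just finished (pid 124519), bdevperf ran with -f, so it kept going past the injected failures; the Fail/s column for EE_Dev_1 in the table above reflects them. The injection setup, as issued via rpc_cmd in the traces, shown here as direct rpc.py calls:

  scripts/rpc.py bdev_malloc_create -b Dev_1 128 512
  scripts/rpc.py bdev_error_create Dev_1                  # exposes EE_Dev_1 on Dev_1
  scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5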
00:13:02.426 [2024-06-07 21:09:24.836992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124620 ] 00:13:02.426 [2024-06-07 21:09:24.990667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.426 [2024-06-07 21:09:25.061818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.386 21:09:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:03.386 21:09:25 -- common/autotest_common.sh@852 -- # return 0 00:13:03.386 21:09:25 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:03.386 21:09:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.386 21:09:25 -- common/autotest_common.sh@10 -- # set +x 00:13:03.386 Dev_1 00:13:03.386 21:09:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.386 21:09:25 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:13:03.386 21:09:25 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:13:03.386 21:09:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:03.386 21:09:25 -- common/autotest_common.sh@889 -- # local i 00:13:03.386 21:09:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:03.386 21:09:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:03.386 21:09:25 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:03.386 21:09:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.386 21:09:25 -- common/autotest_common.sh@10 -- # set +x 00:13:03.386 21:09:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.386 21:09:25 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:03.386 21:09:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.386 21:09:25 -- common/autotest_common.sh@10 -- # set +x 00:13:03.386 [ 00:13:03.386 { 00:13:03.386 "name": "Dev_1", 00:13:03.386 "aliases": [ 00:13:03.386 "0b6035b6-0a4f-4eb6-ab4c-f021d9d5cfc3" 00:13:03.386 ], 00:13:03.386 "product_name": "Malloc disk", 00:13:03.386 "block_size": 512, 00:13:03.386 "num_blocks": 262144, 00:13:03.386 "uuid": "0b6035b6-0a4f-4eb6-ab4c-f021d9d5cfc3", 00:13:03.386 "assigned_rate_limits": { 00:13:03.386 "rw_ios_per_sec": 0, 00:13:03.386 "rw_mbytes_per_sec": 0, 00:13:03.386 "r_mbytes_per_sec": 0, 00:13:03.386 "w_mbytes_per_sec": 0 00:13:03.386 }, 00:13:03.386 "claimed": false, 00:13:03.386 "zoned": false, 00:13:03.386 "supported_io_types": { 00:13:03.386 "read": true, 00:13:03.386 "write": true, 00:13:03.386 "unmap": true, 00:13:03.386 "write_zeroes": true, 00:13:03.386 "flush": true, 00:13:03.386 "reset": true, 00:13:03.386 "compare": false, 00:13:03.386 "compare_and_write": false, 00:13:03.386 "abort": true, 00:13:03.386 "nvme_admin": false, 00:13:03.386 "nvme_io": false 00:13:03.386 }, 00:13:03.386 "memory_domains": [ 00:13:03.386 { 00:13:03.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.386 "dma_device_type": 2 00:13:03.386 } 00:13:03.386 ], 00:13:03.386 "driver_specific": {} 00:13:03.386 } 00:13:03.386 ] 00:13:03.386 21:09:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.386 21:09:25 -- common/autotest_common.sh@895 -- # return 0 00:13:03.386 21:09:25 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:13:03.386 21:09:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.386 21:09:25 -- common/autotest_common.sh@10 -- # set +x 00:13:03.386 true 
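Note: this second pass (pid 124620) repeats the same device and error setup but launches bdevperf without -f, so the harness inverts its expectation: perform_tests must fail once the injected errors hit, which is what the NOT wrapper and the JSON-RPC -32603 response below assert. A sketch of that assertion:

  # without -f the job aborts on the injected failures, so success is a bug
  if examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests; then
      echo "FAIL: perform_tests should have aborted on injected errors"
      exit 1
  fi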
00:13:03.386 21:09:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.386 21:09:25 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:03.386 21:09:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.386 21:09:25 -- common/autotest_common.sh@10 -- # set +x 00:13:03.386 Dev_2 00:13:03.386 21:09:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.386 21:09:25 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:13:03.386 21:09:25 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:13:03.386 21:09:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:03.386 21:09:25 -- common/autotest_common.sh@889 -- # local i 00:13:03.386 21:09:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:03.386 21:09:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:03.386 21:09:25 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:03.386 21:09:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.386 21:09:25 -- common/autotest_common.sh@10 -- # set +x 00:13:03.386 21:09:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.386 21:09:25 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:03.386 21:09:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.386 21:09:25 -- common/autotest_common.sh@10 -- # set +x 00:13:03.386 [ 00:13:03.386 { 00:13:03.386 "name": "Dev_2", 00:13:03.386 "aliases": [ 00:13:03.386 "ac68ef6e-63d9-4777-bd77-e27ada3f1fc5" 00:13:03.386 ], 00:13:03.386 "product_name": "Malloc disk", 00:13:03.386 "block_size": 512, 00:13:03.386 "num_blocks": 262144, 00:13:03.386 "uuid": "ac68ef6e-63d9-4777-bd77-e27ada3f1fc5", 00:13:03.386 "assigned_rate_limits": { 00:13:03.386 "rw_ios_per_sec": 0, 00:13:03.386 "rw_mbytes_per_sec": 0, 00:13:03.386 "r_mbytes_per_sec": 0, 00:13:03.386 "w_mbytes_per_sec": 0 00:13:03.386 }, 00:13:03.386 "claimed": false, 00:13:03.386 "zoned": false, 00:13:03.386 "supported_io_types": { 00:13:03.386 "read": true, 00:13:03.386 "write": true, 00:13:03.386 "unmap": true, 00:13:03.386 "write_zeroes": true, 00:13:03.386 "flush": true, 00:13:03.386 "reset": true, 00:13:03.386 "compare": false, 00:13:03.386 "compare_and_write": false, 00:13:03.386 "abort": true, 00:13:03.386 "nvme_admin": false, 00:13:03.386 "nvme_io": false 00:13:03.386 }, 00:13:03.386 "memory_domains": [ 00:13:03.386 { 00:13:03.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.386 "dma_device_type": 2 00:13:03.386 } 00:13:03.386 ], 00:13:03.386 "driver_specific": {} 00:13:03.386 } 00:13:03.386 ] 00:13:03.386 21:09:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.386 21:09:25 -- common/autotest_common.sh@895 -- # return 0 00:13:03.386 21:09:25 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:03.386 21:09:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.386 21:09:25 -- common/autotest_common.sh@10 -- # set +x 00:13:03.386 21:09:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.386 21:09:25 -- bdev/blockdev.sh@513 -- # NOT wait 124620 00:13:03.386 21:09:25 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:03.386 21:09:25 -- common/autotest_common.sh@640 -- # local es=0 00:13:03.386 21:09:25 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 124620 00:13:03.386 21:09:25 -- common/autotest_common.sh@628 -- # local arg=wait 00:13:03.386 21:09:25 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:03.386 21:09:25 -- common/autotest_common.sh@632 -- # type -t wait 00:13:03.386 21:09:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:03.386 21:09:25 -- common/autotest_common.sh@643 -- # wait 124620 00:13:03.386 Running I/O for 5 seconds... 00:13:03.386 task offset: 167336 on job bdev=EE_Dev_1 fails 00:13:03.386 00:13:03.387 Latency(us) 00:13:03.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.387 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:03.387 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:13:03.387 EE_Dev_1 : 0.00 25200.46 98.44 5727.38 0.00 428.24 153.60 767.07 00:13:03.387 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:03.387 Dev_2 : 0.00 18572.26 72.55 0.00 0.00 599.04 169.43 1087.30 00:13:03.387 =================================================================================================================== 00:13:03.387 Total : 43772.72 170.99 5727.38 0.00 520.88 153.60 1087.30 00:13:03.387 [2024-06-07 21:09:26.008167] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:03.387 request: 00:13:03.387 { 00:13:03.387 "method": "perform_tests", 00:13:03.387 "req_id": 1 00:13:03.387 } 00:13:03.387 Got JSON-RPC error response 00:13:03.387 response: 00:13:03.387 { 00:13:03.387 "code": -32603, 00:13:03.387 "message": "bdevperf failed with error Operation not permitted" 00:13:03.387 } 00:13:03.955 ************************************ 00:13:03.955 END TEST bdev_error 00:13:03.955 ************************************ 00:13:03.955 21:09:26 -- common/autotest_common.sh@643 -- # es=255 00:13:03.955 21:09:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:03.955 21:09:26 -- common/autotest_common.sh@652 -- # es=127 00:13:03.955 21:09:26 -- common/autotest_common.sh@653 -- # case "$es" in 00:13:03.955 21:09:26 -- common/autotest_common.sh@660 -- # es=1 00:13:03.955 21:09:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:03.955 00:13:03.955 real 0m9.155s 00:13:03.955 user 0m9.479s 00:13:03.955 sys 0m0.694s 00:13:03.955 21:09:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.955 21:09:26 -- common/autotest_common.sh@10 -- # set +x 00:13:03.955 21:09:26 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:13:03.955 21:09:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:03.955 21:09:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:03.955 21:09:26 -- common/autotest_common.sh@10 -- # set +x 00:13:03.955 ************************************ 00:13:03.955 START TEST bdev_stat 00:13:03.955 ************************************ 00:13:03.955 21:09:26 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:13:03.955 21:09:26 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:13:03.955 21:09:26 -- bdev/blockdev.sh@594 -- # STAT_PID=124666 00:13:03.955 21:09:26 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:13:03.955 Process Bdev IO statistics testing pid: 124666 00:13:03.955 21:09:26 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 124666' 00:13:03.955 21:09:26 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:13:03.955 21:09:26 -- bdev/blockdev.sh@597 -- # waitforlisten 124666 00:13:03.955 21:09:26 -- common/autotest_common.sh@819 -- 
# '[' -z 124666 ']' 00:13:03.955 21:09:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.955 21:09:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:03.955 21:09:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.955 21:09:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:03.955 21:09:26 -- common/autotest_common.sh@10 -- # set +x 00:13:03.955 [2024-06-07 21:09:26.474330] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:03.955 [2024-06-07 21:09:26.474725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124666 ] 00:13:04.214 [2024-06-07 21:09:26.638015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:04.214 [2024-06-07 21:09:26.726168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.214 [2024-06-07 21:09:26.726181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.780 21:09:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:04.780 21:09:27 -- common/autotest_common.sh@852 -- # return 0 00:13:04.780 21:09:27 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:13:04.780 21:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.780 21:09:27 -- common/autotest_common.sh@10 -- # set +x 00:13:05.038 Malloc_STAT 00:13:05.038 21:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.038 21:09:27 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:13:05.038 21:09:27 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:13:05.038 21:09:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:05.038 21:09:27 -- common/autotest_common.sh@889 -- # local i 00:13:05.038 21:09:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:05.038 21:09:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:05.038 21:09:27 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:05.038 21:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.038 21:09:27 -- common/autotest_common.sh@10 -- # set +x 00:13:05.038 21:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.038 21:09:27 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:13:05.038 21:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.038 21:09:27 -- common/autotest_common.sh@10 -- # set +x 00:13:05.038 [ 00:13:05.038 { 00:13:05.038 "name": "Malloc_STAT", 00:13:05.038 "aliases": [ 00:13:05.038 "9cc73c8b-ba1e-4776-b5fa-b1b833c2a4b2" 00:13:05.038 ], 00:13:05.038 "product_name": "Malloc disk", 00:13:05.038 "block_size": 512, 00:13:05.038 "num_blocks": 262144, 00:13:05.038 "uuid": "9cc73c8b-ba1e-4776-b5fa-b1b833c2a4b2", 00:13:05.038 "assigned_rate_limits": { 00:13:05.038 "rw_ios_per_sec": 0, 00:13:05.038 "rw_mbytes_per_sec": 0, 00:13:05.038 "r_mbytes_per_sec": 0, 00:13:05.038 "w_mbytes_per_sec": 0 00:13:05.038 }, 00:13:05.038 "claimed": false, 00:13:05.038 "zoned": false, 00:13:05.038 "supported_io_types": { 00:13:05.038 "read": true, 00:13:05.038 "write": true, 00:13:05.038 "unmap": true, 00:13:05.038 "write_zeroes": true, 
00:13:05.038 "flush": true, 00:13:05.038 "reset": true, 00:13:05.038 "compare": false, 00:13:05.038 "compare_and_write": false, 00:13:05.038 "abort": true, 00:13:05.038 "nvme_admin": false, 00:13:05.038 "nvme_io": false 00:13:05.038 }, 00:13:05.038 "memory_domains": [ 00:13:05.038 { 00:13:05.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.038 "dma_device_type": 2 00:13:05.038 } 00:13:05.038 ], 00:13:05.038 "driver_specific": {} 00:13:05.038 } 00:13:05.038 ] 00:13:05.038 21:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.038 21:09:27 -- common/autotest_common.sh@895 -- # return 0 00:13:05.038 21:09:27 -- bdev/blockdev.sh@603 -- # sleep 2 00:13:05.038 21:09:27 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:05.038 Running I/O for 10 seconds... 00:13:06.940 21:09:29 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:13:06.940 21:09:29 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:13:06.940 21:09:29 -- bdev/blockdev.sh@558 -- # local iostats 00:13:06.940 21:09:29 -- bdev/blockdev.sh@559 -- # local io_count1 00:13:06.940 21:09:29 -- bdev/blockdev.sh@560 -- # local io_count2 00:13:06.940 21:09:29 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:13:06.940 21:09:29 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:13:06.940 21:09:29 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:13:06.940 21:09:29 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:13:06.940 21:09:29 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:06.940 21:09:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.940 21:09:29 -- common/autotest_common.sh@10 -- # set +x 00:13:06.940 21:09:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.940 21:09:29 -- bdev/blockdev.sh@566 -- # iostats='{ 00:13:06.940 "tick_rate": 2200000000, 00:13:06.940 "ticks": 1601537666970, 00:13:06.940 "bdevs": [ 00:13:06.940 { 00:13:06.940 "name": "Malloc_STAT", 00:13:06.940 "bytes_read": 911249920, 00:13:06.940 "num_read_ops": 222467, 00:13:06.940 "bytes_written": 0, 00:13:06.940 "num_write_ops": 0, 00:13:06.940 "bytes_unmapped": 0, 00:13:06.940 "num_unmap_ops": 0, 00:13:06.940 "bytes_copied": 0, 00:13:06.940 "num_copy_ops": 0, 00:13:06.940 "read_latency_ticks": 2167229107753, 00:13:06.940 "max_read_latency_ticks": 14060694, 00:13:06.940 "min_read_latency_ticks": 379684, 00:13:06.940 "write_latency_ticks": 0, 00:13:06.940 "max_write_latency_ticks": 0, 00:13:06.940 "min_write_latency_ticks": 0, 00:13:06.940 "unmap_latency_ticks": 0, 00:13:06.940 "max_unmap_latency_ticks": 0, 00:13:06.940 "min_unmap_latency_ticks": 0, 00:13:06.940 "copy_latency_ticks": 0, 00:13:06.940 "max_copy_latency_ticks": 0, 00:13:06.940 "min_copy_latency_ticks": 0, 00:13:06.940 "io_error": {} 00:13:06.940 } 00:13:06.940 ] 00:13:06.940 }' 00:13:06.940 21:09:29 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:13:06.940 21:09:29 -- bdev/blockdev.sh@567 -- # io_count1=222467 00:13:06.940 21:09:29 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:13:06.940 21:09:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.940 21:09:29 -- common/autotest_common.sh@10 -- # set +x 00:13:06.940 21:09:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.940 21:09:29 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:13:06.940 "tick_rate": 2200000000, 00:13:06.940 "ticks": 1601710595525, 00:13:06.940 "name": "Malloc_STAT", 
00:13:06.940 "channels": [ 00:13:06.940 { 00:13:06.940 "thread_id": 2, 00:13:06.940 "bytes_read": 467664896, 00:13:06.940 "num_read_ops": 114176, 00:13:06.940 "bytes_written": 0, 00:13:06.940 "num_write_ops": 0, 00:13:06.940 "bytes_unmapped": 0, 00:13:06.940 "num_unmap_ops": 0, 00:13:06.940 "bytes_copied": 0, 00:13:06.940 "num_copy_ops": 0, 00:13:06.940 "read_latency_ticks": 1126955897266, 00:13:06.940 "max_read_latency_ticks": 14060694, 00:13:06.940 "min_read_latency_ticks": 7692378, 00:13:06.940 "write_latency_ticks": 0, 00:13:06.940 "max_write_latency_ticks": 0, 00:13:06.940 "min_write_latency_ticks": 0, 00:13:06.940 "unmap_latency_ticks": 0, 00:13:06.940 "max_unmap_latency_ticks": 0, 00:13:06.940 "min_unmap_latency_ticks": 0, 00:13:06.940 "copy_latency_ticks": 0, 00:13:06.940 "max_copy_latency_ticks": 0, 00:13:06.940 "min_copy_latency_ticks": 0 00:13:06.940 }, 00:13:06.940 { 00:13:06.940 "thread_id": 3, 00:13:06.940 "bytes_read": 480247808, 00:13:06.940 "num_read_ops": 117248, 00:13:06.940 "bytes_written": 0, 00:13:06.940 "num_write_ops": 0, 00:13:06.940 "bytes_unmapped": 0, 00:13:06.940 "num_unmap_ops": 0, 00:13:06.940 "bytes_copied": 0, 00:13:06.940 "num_copy_ops": 0, 00:13:06.940 "read_latency_ticks": 1129459666119, 00:13:06.940 "max_read_latency_ticks": 11299233, 00:13:06.940 "min_read_latency_ticks": 7706405, 00:13:06.940 "write_latency_ticks": 0, 00:13:06.940 "max_write_latency_ticks": 0, 00:13:06.940 "min_write_latency_ticks": 0, 00:13:06.940 "unmap_latency_ticks": 0, 00:13:06.940 "max_unmap_latency_ticks": 0, 00:13:06.940 "min_unmap_latency_ticks": 0, 00:13:06.940 "copy_latency_ticks": 0, 00:13:06.940 "max_copy_latency_ticks": 0, 00:13:06.940 "min_copy_latency_ticks": 0 00:13:06.940 } 00:13:06.940 ] 00:13:06.940 }' 00:13:06.940 21:09:29 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:13:07.198 21:09:29 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=114176 00:13:07.198 21:09:29 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=114176 00:13:07.198 21:09:29 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:13:07.198 21:09:29 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=117248 00:13:07.198 21:09:29 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=231424 00:13:07.198 21:09:29 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:07.198 21:09:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.198 21:09:29 -- common/autotest_common.sh@10 -- # set +x 00:13:07.198 21:09:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.198 21:09:29 -- bdev/blockdev.sh@575 -- # iostats='{ 00:13:07.198 "tick_rate": 2200000000, 00:13:07.198 "ticks": 1602014651626, 00:13:07.198 "bdevs": [ 00:13:07.198 { 00:13:07.199 "name": "Malloc_STAT", 00:13:07.199 "bytes_read": 1010864640, 00:13:07.199 "num_read_ops": 246787, 00:13:07.199 "bytes_written": 0, 00:13:07.199 "num_write_ops": 0, 00:13:07.199 "bytes_unmapped": 0, 00:13:07.199 "num_unmap_ops": 0, 00:13:07.199 "bytes_copied": 0, 00:13:07.199 "num_copy_ops": 0, 00:13:07.199 "read_latency_ticks": 2409892014679, 00:13:07.199 "max_read_latency_ticks": 14060694, 00:13:07.199 "min_read_latency_ticks": 379684, 00:13:07.199 "write_latency_ticks": 0, 00:13:07.199 "max_write_latency_ticks": 0, 00:13:07.199 "min_write_latency_ticks": 0, 00:13:07.199 "unmap_latency_ticks": 0, 00:13:07.199 "max_unmap_latency_ticks": 0, 00:13:07.199 "min_unmap_latency_ticks": 0, 00:13:07.199 "copy_latency_ticks": 0, 00:13:07.199 "max_copy_latency_ticks": 0, 00:13:07.199 
"min_copy_latency_ticks": 0, 00:13:07.199 "io_error": {} 00:13:07.199 } 00:13:07.199 ] 00:13:07.199 }' 00:13:07.199 21:09:29 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:13:07.199 21:09:29 -- bdev/blockdev.sh@576 -- # io_count2=246787 00:13:07.199 21:09:29 -- bdev/blockdev.sh@581 -- # '[' 231424 -lt 222467 ']' 00:13:07.199 21:09:29 -- bdev/blockdev.sh@581 -- # '[' 231424 -gt 246787 ']' 00:13:07.199 21:09:29 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:13:07.199 21:09:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.199 21:09:29 -- common/autotest_common.sh@10 -- # set +x 00:13:07.199 00:13:07.199 Latency(us) 00:13:07.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.199 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:07.199 Malloc_STAT : 2.23 56795.58 221.86 0.00 0.00 4497.08 1124.54 6404.65 00:13:07.199 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:07.199 Malloc_STAT : 2.23 58248.92 227.53 0.00 0.00 4385.00 904.84 5153.51 00:13:07.199 =================================================================================================================== 00:13:07.199 Total : 115044.51 449.39 0.00 0.00 4440.31 904.84 6404.65 00:13:07.199 0 00:13:07.199 21:09:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.199 21:09:29 -- bdev/blockdev.sh@607 -- # killprocess 124666 00:13:07.199 21:09:29 -- common/autotest_common.sh@926 -- # '[' -z 124666 ']' 00:13:07.199 21:09:29 -- common/autotest_common.sh@930 -- # kill -0 124666 00:13:07.199 21:09:29 -- common/autotest_common.sh@931 -- # uname 00:13:07.199 21:09:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:07.199 21:09:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124666 00:13:07.199 21:09:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:07.199 21:09:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:07.199 21:09:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124666' 00:13:07.199 killing process with pid 124666 00:13:07.199 21:09:29 -- common/autotest_common.sh@945 -- # kill 124666 00:13:07.199 Received shutdown signal, test time was about 2.286553 seconds 00:13:07.199 00:13:07.199 Latency(us) 00:13:07.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.199 =================================================================================================================== 00:13:07.199 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:07.199 21:09:29 -- common/autotest_common.sh@950 -- # wait 124666 00:13:07.457 ************************************ 00:13:07.457 END TEST bdev_stat 00:13:07.457 ************************************ 00:13:07.457 21:09:30 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:13:07.457 00:13:07.457 real 0m3.697s 00:13:07.457 user 0m7.329s 00:13:07.457 sys 0m0.354s 00:13:07.458 21:09:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:07.458 21:09:30 -- common/autotest_common.sh@10 -- # set +x 00:13:07.716 21:09:30 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:13:07.716 21:09:30 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:13:07.716 21:09:30 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:13:07.716 21:09:30 -- bdev/blockdev.sh@809 -- # cleanup 00:13:07.716 21:09:30 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:07.716 21:09:30 -- 
bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:07.716 21:09:30 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:13:07.716 21:09:30 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:13:07.716 21:09:30 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:13:07.716 21:09:30 -- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:13:07.716 ************************************ 00:13:07.716 END TEST blockdev_general 00:13:07.716 ************************************ 00:13:07.716 00:13:07.716 real 1m57.139s 00:13:07.716 user 5m17.669s 00:13:07.716 sys 0m20.528s 00:13:07.716 21:09:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:07.716 21:09:30 -- common/autotest_common.sh@10 -- # set +x 00:13:07.716 21:09:30 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:07.716 21:09:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:07.716 21:09:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:07.716 21:09:30 -- common/autotest_common.sh@10 -- # set +x 00:13:07.716 ************************************ 00:13:07.716 START TEST bdev_raid 00:13:07.716 ************************************ 00:13:07.716 21:09:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:07.716 * Looking for test storage... 00:13:07.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:07.716 21:09:30 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:07.716 21:09:30 -- bdev/nbd_common.sh@6 -- # set -e 00:13:07.716 21:09:30 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:13:07.716 21:09:30 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:13:07.716 21:09:30 -- bdev/bdev_raid.sh@716 -- # uname -s 00:13:07.716 21:09:30 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:13:07.716 21:09:30 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:13:07.716 21:09:30 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:13:07.716 21:09:30 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:13:07.716 21:09:30 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:13:07.716 21:09:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:07.716 21:09:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:07.716 21:09:30 -- common/autotest_common.sh@10 -- # set +x 00:13:07.716 ************************************ 00:13:07.716 START TEST raid_function_test_raid0 00:13:07.716 ************************************ 00:13:07.716 21:09:30 -- common/autotest_common.sh@1104 -- # raid_function_test raid0 00:13:07.716 21:09:30 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:13:07.716 21:09:30 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:13:07.717 21:09:30 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:13:07.717 21:09:30 -- bdev/bdev_raid.sh@86 -- # raid_pid=124832 00:13:07.717 21:09:30 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:07.717 Process raid pid: 124832 00:13:07.717 21:09:30 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 124832' 00:13:07.717 21:09:30 -- bdev/bdev_raid.sh@88 -- # waitforlisten 124832 /var/tmp/spdk-raid.sock 00:13:07.717 21:09:30 -- common/autotest_common.sh@819 -- # '[' -z 124832 ']' 00:13:07.717 21:09:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:07.717 
21:09:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:07.717 21:09:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:07.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:07.717 21:09:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:07.717 21:09:30 -- common/autotest_common.sh@10 -- # set +x 00:13:07.717 [2024-06-07 21:09:30.380110] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:07.717 [2024-06-07 21:09:30.380571] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.976 [2024-06-07 21:09:30.534916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.976 [2024-06-07 21:09:30.628672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.234 [2024-06-07 21:09:30.686317] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.801 21:09:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:08.801 21:09:31 -- common/autotest_common.sh@852 -- # return 0 00:13:08.801 21:09:31 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:13:08.801 21:09:31 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:13:08.801 21:09:31 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:08.801 21:09:31 -- bdev/bdev_raid.sh@70 -- # cat 00:13:08.801 21:09:31 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:09.059 [2024-06-07 21:09:31.619904] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:09.059 [2024-06-07 21:09:31.622521] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:09.059 [2024-06-07 21:09:31.622769] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:13:09.059 [2024-06-07 21:09:31.622891] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:09.059 [2024-06-07 21:09:31.623146] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:13:09.059 [2024-06-07 21:09:31.623635] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:13:09.059 [2024-06-07 21:09:31.623800] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:13:09.059 [2024-06-07 21:09:31.624125] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.059 Base_1 00:13:09.059 Base_2 00:13:09.059 21:09:31 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:09.059 21:09:31 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:09.059 21:09:31 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:13:09.321 21:09:31 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:13:09.321 21:09:31 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:13:09.321 21:09:31 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:09.321 21:09:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:09.321 21:09:31 -- bdev/nbd_common.sh@10 -- # 
bdev_list=($2) 00:13:09.321 21:09:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:09.321 21:09:31 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:13:09.322 21:09:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:09.322 21:09:31 -- bdev/nbd_common.sh@12 -- # local i 00:13:09.322 21:09:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:09.322 21:09:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:09.322 21:09:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:09.580 [2024-06-07 21:09:32.060305] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:13:09.580 /dev/nbd0 00:13:09.580 21:09:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:09.580 21:09:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:09.580 21:09:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:13:09.580 21:09:32 -- common/autotest_common.sh@857 -- # local i 00:13:09.580 21:09:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:09.580 21:09:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:09.580 21:09:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:09.580 21:09:32 -- common/autotest_common.sh@861 -- # break 00:13:09.580 21:09:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:09.580 21:09:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:09.581 21:09:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.581 1+0 records in 00:13:09.581 1+0 records out 00:13:09.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387439 s, 10.6 MB/s 00:13:09.581 21:09:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.581 21:09:32 -- common/autotest_common.sh@874 -- # size=4096 00:13:09.581 21:09:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.581 21:09:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:09.581 21:09:32 -- common/autotest_common.sh@877 -- # return 0 00:13:09.581 21:09:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:09.581 21:09:32 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:09.581 21:09:32 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:09.581 21:09:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:09.581 21:09:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:09.839 21:09:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:09.839 { 00:13:09.839 "nbd_device": "/dev/nbd0", 00:13:09.839 "bdev_name": "raid" 00:13:09.839 } 00:13:09.839 ]' 00:13:09.839 21:09:32 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:09.839 { 00:13:09.839 "nbd_device": "/dev/nbd0", 00:13:09.839 "bdev_name": "raid" 00:13:09.839 } 00:13:09.839 ]' 00:13:09.839 21:09:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:09.839 21:09:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:09.839 21:09:32 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:09.839 21:09:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:09.839 21:09:32 -- bdev/nbd_common.sh@65 -- # count=1 00:13:09.839 21:09:32 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:09.839 21:09:32 -- bdev/bdev_raid.sh@98 -- # count=1 00:13:09.839 21:09:32 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:13:09.839 21:09:32 -- 
bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:09.839 21:09:32 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:13:09.839 21:09:32 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:13:09.840 21:09:32 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:09.840 21:09:32 -- bdev/bdev_raid.sh@20 -- # local blksize 00:13:09.840 21:09:32 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:09.840 21:09:32 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:13:09.840 21:09:32 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:13:09.840 21:09:32 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:13:09.840 21:09:32 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:13:09.840 21:09:32 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:13:09.840 21:09:32 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=(0 1028 321) 00:13:09.840 21:09:32 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:13:09.840 21:09:32 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=(128 2035 456) 00:13:09.840 21:09:32 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:13:09.840 21:09:32 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:13:09.840 21:09:32 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:13:09.840 21:09:32 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:13:09.840 4096+0 records in 00:13:09.840 4096+0 records out 00:13:09.840 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0382601 s, 54.8 MB/s 00:13:09.840 21:09:32 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:10.098 4096+0 records in 00:13:10.098 4096+0 records out 00:13:10.098 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.267996 s, 7.8 MB/s 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:10.098 128+0 records in 00:13:10.098 128+0 records out 00:13:10.098 65536 bytes (66 kB, 64 KiB) copied, 0.000309687 s, 212 MB/s 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:10.098 2035+0 records in 00:13:10.098 2035+0 records out 00:13:10.098 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0065785 s, 158 MB/s 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:10.098 
21:09:32 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:10.098 456+0 records in 00:13:10.098 456+0 records out 00:13:10.098 233472 bytes (233 kB, 228 KiB) copied, 0.00197859 s, 118 MB/s 00:13:10.098 21:09:32 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:10.357 21:09:32 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:10.357 21:09:32 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:10.357 21:09:32 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:10.357 21:09:32 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:10.357 21:09:32 -- bdev/bdev_raid.sh@53 -- # return 0 00:13:10.357 21:09:32 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:10.357 21:09:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:10.357 21:09:32 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:13:10.357 21:09:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:10.357 21:09:32 -- bdev/nbd_common.sh@51 -- # local i 00:13:10.357 21:09:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.357 21:09:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:10.357 21:09:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:10.357 21:09:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:10.357 21:09:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:10.357 21:09:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.357 21:09:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.357 21:09:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:10.357 [2024-06-07 21:09:33.001761] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.357 21:09:32 -- bdev/nbd_common.sh@41 -- # break 00:13:10.357 21:09:32 -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.357 21:09:32 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:10.357 21:09:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:10.357 21:09:33 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:10.620 21:09:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:10.620 21:09:33 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:10.620 21:09:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:10.620 21:09:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:10.620 21:09:33 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:10.620 21:09:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:10.620 21:09:33 -- bdev/nbd_common.sh@65 -- # true 00:13:10.620 21:09:33 -- bdev/nbd_common.sh@65 -- # count=0 00:13:10.620 21:09:33 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:10.620 21:09:33 -- bdev/bdev_raid.sh@106 -- # count=0 00:13:10.620 21:09:33 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:13:10.620 21:09:33 -- bdev/bdev_raid.sh@111 -- # killprocess 124832 00:13:10.620 21:09:33 -- common/autotest_common.sh@926 -- # '[' -z 124832 ']' 00:13:10.620 21:09:33 -- common/autotest_common.sh@930 -- # kill -0 124832 00:13:10.621 21:09:33 -- common/autotest_common.sh@931 -- # uname 00:13:10.883 21:09:33 -- common/autotest_common.sh@931 -- # 
'[' Linux = Linux ']' 00:13:10.883 21:09:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124832 00:13:10.883 21:09:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:10.883 killing process with pid 124832 00:13:10.883 21:09:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:10.883 21:09:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124832' 00:13:10.883 21:09:33 -- common/autotest_common.sh@945 -- # kill 124832 00:13:10.883 21:09:33 -- common/autotest_common.sh@950 -- # wait 124832 00:13:10.883 [2024-06-07 21:09:33.309018] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:10.883 [2024-06-07 21:09:33.309195] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:10.883 [2024-06-07 21:09:33.309295] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:10.883 [2024-06-07 21:09:33.309309] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:13:10.883 [2024-06-07 21:09:33.330257] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:11.141 21:09:33 -- bdev/bdev_raid.sh@113 -- # return 0 00:13:11.141 ************************************ 00:13:11.141 END TEST raid_function_test_raid0 00:13:11.141 ************************************ 00:13:11.141 00:13:11.141 real 0m3.244s 00:13:11.141 user 0m4.488s 00:13:11.141 sys 0m0.807s 00:13:11.141 21:09:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.141 21:09:33 -- common/autotest_common.sh@10 -- # set +x 00:13:11.141 21:09:33 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:13:11.141 21:09:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:11.141 21:09:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:11.141 21:09:33 -- common/autotest_common.sh@10 -- # set +x 00:13:11.141 ************************************ 00:13:11.141 START TEST raid_function_test_concat 00:13:11.141 ************************************ 00:13:11.141 21:09:33 -- common/autotest_common.sh@1104 -- # raid_function_test concat 00:13:11.141 21:09:33 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:13:11.141 21:09:33 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:13:11.141 21:09:33 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:13:11.141 21:09:33 -- bdev/bdev_raid.sh@86 -- # raid_pid=124973 00:13:11.141 21:09:33 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 124973' 00:13:11.141 Process raid pid: 124973 00:13:11.141 21:09:33 -- bdev/bdev_raid.sh@88 -- # waitforlisten 124973 /var/tmp/spdk-raid.sock 00:13:11.141 21:09:33 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:11.141 21:09:33 -- common/autotest_common.sh@819 -- # '[' -z 124973 ']' 00:13:11.141 21:09:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:11.141 21:09:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:11.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:11.141 21:09:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
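The concat pass that begins here re-runs the same construct-and-verify cycle as raid_function_test_raid0 above: two malloc base bdevs are claimed into a raid bdev over the dedicated /var/tmp/spdk-raid.sock socket, the raid is exported through /dev/nbd0, filled from /dev/urandom, and compared byte for byte. A minimal standalone sketch of that sequence, with paths, sizes and device names taken from this log (the -r concat spelling of the raid level is an assumption; the harness itself feeds a generated rpcs.txt to rpc.py):

    # RPC helper bound to the raid test socket, as used throughout this log.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Two malloc base bdevs: 32 MiB each, 512-byte blocks (blockcnt 131072 total).
    $RPC bdev_malloc_create -b Base_1 32 512
    $RPC bdev_malloc_create -b Base_2 32 512

    # Assemble the concat raid and export it over NBD.
    $RPC bdev_raid_create -z 64 -r concat -b 'Base_1 Base_2' -n raid
    $RPC nbd_start_disk raid /dev/nbd0

    # Fill the device and verify the round trip, exactly as the harness does.
    dd if=/dev/urandom of=/raidrandtest bs=512 count=4096
    dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
    blockdev --flushbufs /dev/nbd0
    cmp -b -n 2097152 /raidrandtest /dev/nbd0

The three unmap passes that follow in the log (block offsets 0, 1028 and 321; lengths 128, 2035 and 456 blocks) then zero the matching range of the reference file with dd conv=notrunc, blkdiscard the same range on /dev/nbd0, and repeat the cmp, checking that reads stay consistent after unmap.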
00:13:11.141 21:09:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:11.141 21:09:33 -- common/autotest_common.sh@10 -- # set +x 00:13:11.141 [2024-06-07 21:09:33.673924] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:11.142 [2024-06-07 21:09:33.674136] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.400 [2024-06-07 21:09:33.822928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.400 [2024-06-07 21:09:33.900346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.400 [2024-06-07 21:09:33.953479] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.967 21:09:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:11.967 21:09:34 -- common/autotest_common.sh@852 -- # return 0 00:13:11.967 21:09:34 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:13:11.967 21:09:34 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:13:11.967 21:09:34 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:11.967 21:09:34 -- bdev/bdev_raid.sh@70 -- # cat 00:13:11.967 21:09:34 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:12.226 [2024-06-07 21:09:34.890548] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:12.226 [2024-06-07 21:09:34.892682] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:12.226 [2024-06-07 21:09:34.892767] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:13:12.226 [2024-06-07 21:09:34.892779] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:12.226 [2024-06-07 21:09:34.893001] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:13:12.226 [2024-06-07 21:09:34.893435] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:13:12.226 [2024-06-07 21:09:34.893475] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:13:12.226 [2024-06-07 21:09:34.893680] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.226 Base_1 00:13:12.226 Base_2 00:13:12.486 21:09:34 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:12.486 21:09:34 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:12.486 21:09:34 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:13:12.486 21:09:35 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:13:12.486 21:09:35 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:13:12.486 21:09:35 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:12.486 21:09:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:12.486 21:09:35 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:13:12.486 21:09:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:12.486 21:09:35 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:13:12.486 21:09:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:12.486 21:09:35 -- bdev/nbd_common.sh@12 -- # local i 00:13:12.486 21:09:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
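The nbd_start_disks loop entered here attaches the freshly built concat bdev to the kernel NBD driver and then blocks in waitfornbd until the node is usable. A hedged sketch of what that amounts to (the 20-try bound and the grep against /proc/partitions mirror the helper's xtrace below; the polling interval is an assumption, since the helper only bounds the retries):

    # Export the raid bdev on /dev/nbd0 over the raid test socket.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        nbd_start_disk raid /dev/nbd0

    # Poll until the kernel lists the device, as waitfornbd does.
    for i in $(seq 1 20); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1   # interval assumed; not fixed by the helper
    done

    # A single direct 4 KiB read confirms the device answers I/O.
    dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct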
00:13:12.486 21:09:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:12.486 21:09:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:12.746 [2024-06-07 21:09:35.346739] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:13:12.746 /dev/nbd0 00:13:12.746 21:09:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:12.746 21:09:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:12.746 21:09:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:13:12.746 21:09:35 -- common/autotest_common.sh@857 -- # local i 00:13:12.746 21:09:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:12.746 21:09:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:12.746 21:09:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:12.746 21:09:35 -- common/autotest_common.sh@861 -- # break 00:13:12.746 21:09:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:12.746 21:09:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:12.746 21:09:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:12.746 1+0 records in 00:13:12.746 1+0 records out 00:13:12.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322144 s, 12.7 MB/s 00:13:12.746 21:09:35 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.746 21:09:35 -- common/autotest_common.sh@874 -- # size=4096 00:13:12.746 21:09:35 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.746 21:09:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:12.746 21:09:35 -- common/autotest_common.sh@877 -- # return 0 00:13:12.746 21:09:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.746 21:09:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:12.746 21:09:35 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:12.746 21:09:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:12.746 21:09:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:13.005 21:09:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:13.005 { 00:13:13.005 "nbd_device": "/dev/nbd0", 00:13:13.005 "bdev_name": "raid" 00:13:13.005 } 00:13:13.005 ]' 00:13:13.005 21:09:35 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:13.005 { 00:13:13.005 "nbd_device": "/dev/nbd0", 00:13:13.005 "bdev_name": "raid" 00:13:13.005 } 00:13:13.005 ]' 00:13:13.005 21:09:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:13.263 21:09:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:13.263 21:09:35 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:13.263 21:09:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:13.263 21:09:35 -- bdev/nbd_common.sh@65 -- # count=1 00:13:13.263 21:09:35 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:13.263 21:09:35 -- bdev/bdev_raid.sh@98 -- # count=1 00:13:13.263 21:09:35 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:13:13.263 21:09:35 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:13.263 21:09:35 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:13:13.263 21:09:35 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:13:13.263 21:09:35 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:13.263 21:09:35 -- 
bdev/bdev_raid.sh@20 -- # local blksize 00:13:13.263 21:09:35 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:13.263 21:09:35 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:13:13.263 21:09:35 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:13:13.263 21:09:35 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:13:13.263 21:09:35 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:13:13.263 21:09:35 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:13:13.263 21:09:35 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=(0 1028 321) 00:13:13.263 21:09:35 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:13:13.263 21:09:35 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=(128 2035 456) 00:13:13.263 21:09:35 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:13:13.263 21:09:35 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:13:13.263 21:09:35 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:13:13.263 21:09:35 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:13:13.264 4096+0 records in 00:13:13.264 4096+0 records out 00:13:13.264 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0263687 s, 79.5 MB/s 00:13:13.264 21:09:35 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:13.522 4096+0 records in 00:13:13.522 4096+0 records out 00:13:13.522 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.264211 s, 7.9 MB/s 00:13:13.522 21:09:35 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:13:13.522 21:09:35 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:13.522 128+0 records in 00:13:13.522 128+0 records out 00:13:13.522 65536 bytes (66 kB, 64 KiB) copied, 0.000465897 s, 141 MB/s 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:13.522 2035+0 records in 00:13:13.522 2035+0 records out 00:13:13.522 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00503304 s, 207 MB/s 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:13.522 456+0 records in 
00:13:13.522 456+0 records out 00:13:13.522 233472 bytes (233 kB, 228 KiB) copied, 0.00193454 s, 121 MB/s 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@53 -- # return 0 00:13:13.522 21:09:36 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:13.522 21:09:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:13.522 21:09:36 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:13:13.523 21:09:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:13.523 21:09:36 -- bdev/nbd_common.sh@51 -- # local i 00:13:13.523 21:09:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.523 21:09:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:13.781 21:09:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:13.781 21:09:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:13.781 21:09:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:13.781 21:09:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.781 21:09:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.781 21:09:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:13.781 [2024-06-07 21:09:36.333822] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.781 21:09:36 -- bdev/nbd_common.sh@41 -- # break 00:13:13.781 21:09:36 -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.781 21:09:36 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:13.781 21:09:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:13.781 21:09:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:14.040 21:09:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:14.040 21:09:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:14.040 21:09:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:14.040 21:09:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:14.040 21:09:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:14.040 21:09:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:14.040 21:09:36 -- bdev/nbd_common.sh@65 -- # true 00:13:14.040 21:09:36 -- bdev/nbd_common.sh@65 -- # count=0 00:13:14.040 21:09:36 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:14.040 21:09:36 -- bdev/bdev_raid.sh@106 -- # count=0 00:13:14.040 21:09:36 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:13:14.040 21:09:36 -- bdev/bdev_raid.sh@111 -- # killprocess 124973 00:13:14.040 21:09:36 -- common/autotest_common.sh@926 -- # '[' -z 124973 ']' 00:13:14.040 21:09:36 -- common/autotest_common.sh@930 -- # kill -0 124973 00:13:14.040 21:09:36 -- common/autotest_common.sh@931 -- # uname 00:13:14.040 21:09:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:14.040 21:09:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124973 00:13:14.040 21:09:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:14.040 killing process with pid 124973 00:13:14.040 21:09:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:14.040 
21:09:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124973' 00:13:14.040 21:09:36 -- common/autotest_common.sh@945 -- # kill 124973 00:13:14.040 21:09:36 -- common/autotest_common.sh@950 -- # wait 124973 00:13:14.040 [2024-06-07 21:09:36.678446] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:14.040 [2024-06-07 21:09:36.678575] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.040 [2024-06-07 21:09:36.678657] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.040 [2024-06-07 21:09:36.678670] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:13:14.040 [2024-06-07 21:09:36.700505] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:14.299 21:09:36 -- bdev/bdev_raid.sh@113 -- # return 0 00:13:14.299 00:13:14.299 real 0m3.310s 00:13:14.299 user 0m4.551s 00:13:14.299 sys 0m0.893s 00:13:14.299 ************************************ 00:13:14.299 END TEST raid_function_test_concat 00:13:14.299 ************************************ 00:13:14.299 21:09:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:14.299 21:09:36 -- common/autotest_common.sh@10 -- # set +x 00:13:14.557 21:09:36 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:13:14.557 21:09:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:14.557 21:09:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:14.557 21:09:36 -- common/autotest_common.sh@10 -- # set +x 00:13:14.557 ************************************ 00:13:14.557 START TEST raid0_resize_test 00:13:14.557 ************************************ 00:13:14.557 21:09:36 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:13:14.557 21:09:36 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:13:14.557 21:09:36 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:13:14.557 21:09:36 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:13:14.557 21:09:36 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:13:14.557 21:09:36 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:13:14.557 21:09:36 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:13:14.557 21:09:36 -- bdev/bdev_raid.sh@301 -- # raid_pid=125115 00:13:14.557 21:09:36 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 125115' 00:13:14.557 Process raid pid: 125115 00:13:14.557 21:09:36 -- bdev/bdev_raid.sh@303 -- # waitforlisten 125115 /var/tmp/spdk-raid.sock 00:13:14.557 21:09:36 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:14.557 21:09:36 -- common/autotest_common.sh@819 -- # '[' -z 125115 ']' 00:13:14.557 21:09:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:14.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:14.557 21:09:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:14.558 21:09:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:14.558 21:09:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:14.558 21:09:36 -- common/autotest_common.sh@10 -- # set +x 00:13:14.558 [2024-06-07 21:09:37.043376] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:13:14.558 [2024-06-07 21:09:37.043611] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.558 [2024-06-07 21:09:37.204074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.817 [2024-06-07 21:09:37.275092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.817 [2024-06-07 21:09:37.329630] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.410 21:09:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:15.410 21:09:38 -- common/autotest_common.sh@852 -- # return 0 00:13:15.410 21:09:38 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:13:15.673 Base_1 00:13:15.673 21:09:38 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:13:15.932 Base_2 00:13:15.932 21:09:38 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:13:16.191 [2024-06-07 21:09:38.715798] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:16.191 [2024-06-07 21:09:38.717596] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:16.191 [2024-06-07 21:09:38.717671] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:13:16.191 [2024-06-07 21:09:38.717699] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:16.191 [2024-06-07 21:09:38.717917] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005380 00:13:16.191 [2024-06-07 21:09:38.718299] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:13:16.191 [2024-06-07 21:09:38.718321] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000007280 00:13:16.191 [2024-06-07 21:09:38.718521] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.191 21:09:38 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:13:16.449 [2024-06-07 21:09:38.915843] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:16.449 [2024-06-07 21:09:38.915873] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:13:16.449 true 00:13:16.449 21:09:38 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:16.449 21:09:38 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:13:16.707 [2024-06-07 21:09:39.128058] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:16.707 21:09:39 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:13:16.707 21:09:39 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:13:16.707 21:09:39 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:13:16.707 21:09:39 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:13:16.966 [2024-06-07 21:09:39.395966] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 
00:13:16.966 [2024-06-07 21:09:39.396003] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:13:16.966 [2024-06-07 21:09:39.396081] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:13:16.966 [2024-06-07 21:09:39.396140] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:16.966 true 00:13:16.966 21:09:39 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:16.966 21:09:39 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:13:16.966 [2024-06-07 21:09:39.596087] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:16.966 21:09:39 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:13:16.966 21:09:39 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:13:16.966 21:09:39 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:13:16.966 21:09:39 -- bdev/bdev_raid.sh@332 -- # killprocess 125115 00:13:16.966 21:09:39 -- common/autotest_common.sh@926 -- # '[' -z 125115 ']' 00:13:16.966 21:09:39 -- common/autotest_common.sh@930 -- # kill -0 125115 00:13:16.966 21:09:39 -- common/autotest_common.sh@931 -- # uname 00:13:16.966 21:09:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:16.966 21:09:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125115 00:13:16.966 21:09:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:16.966 killing process with pid 125115 00:13:16.966 21:09:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:16.966 21:09:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125115' 00:13:16.966 21:09:39 -- common/autotest_common.sh@945 -- # kill 125115 00:13:16.966 21:09:39 -- common/autotest_common.sh@950 -- # wait 125115 00:13:16.966 [2024-06-07 21:09:39.628700] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:16.966 [2024-06-07 21:09:39.628841] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:16.966 [2024-06-07 21:09:39.628940] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:16.966 [2024-06-07 21:09:39.628968] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Raid, state offline 00:13:16.966 [2024-06-07 21:09:39.629557] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:17.224 21:09:39 -- bdev/bdev_raid.sh@334 -- # return 0 00:13:17.224 00:13:17.224 real 0m2.876s 00:13:17.224 user 0m4.556s 00:13:17.224 sys 0m0.420s 00:13:17.224 21:09:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.224 ************************************ 00:13:17.224 END TEST raid0_resize_test 00:13:17.224 ************************************ 00:13:17.224 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:13:17.483 21:09:39 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:17.483 21:09:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:17.483 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:13:17.483 ************************************ 00:13:17.483 START TEST 
raid_state_function_test 00:13:17.483 ************************************ 00:13:17.483 21:09:39 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@226 -- # raid_pid=125212 00:13:17.483 Process raid pid: 125212 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125212' 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125212 /var/tmp/spdk-raid.sock 00:13:17.483 21:09:39 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:17.483 21:09:39 -- common/autotest_common.sh@819 -- # '[' -z 125212 ']' 00:13:17.483 21:09:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:17.483 21:09:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:17.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:17.483 21:09:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:17.483 21:09:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:17.483 21:09:39 -- common/autotest_common.sh@10 -- # set +x 00:13:17.483 [2024-06-07 21:09:39.970568] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:13:17.483 [2024-06-07 21:09:39.971413] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.483 [2024-06-07 21:09:40.136515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.742 [2024-06-07 21:09:40.203502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.743 [2024-06-07 21:09:40.258653] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:18.310 21:09:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:18.310 21:09:40 -- common/autotest_common.sh@852 -- # return 0 00:13:18.310 21:09:40 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:18.569 [2024-06-07 21:09:41.080198] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:18.569 [2024-06-07 21:09:41.080306] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:18.569 [2024-06-07 21:09:41.080338] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:18.569 [2024-06-07 21:09:41.080358] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:18.569 21:09:41 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:18.569 21:09:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:18.569 21:09:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:18.569 21:09:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:18.569 21:09:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:18.569 21:09:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:18.569 21:09:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:18.569 21:09:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:18.569 21:09:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:18.569 21:09:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:18.569 21:09:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.569 21:09:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.827 21:09:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:18.827 "name": "Existed_Raid", 00:13:18.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.827 "strip_size_kb": 64, 00:13:18.827 "state": "configuring", 00:13:18.827 "raid_level": "raid0", 00:13:18.827 "superblock": false, 00:13:18.827 "num_base_bdevs": 2, 00:13:18.827 "num_base_bdevs_discovered": 0, 00:13:18.827 "num_base_bdevs_operational": 2, 00:13:18.827 "base_bdevs_list": [ 00:13:18.827 { 00:13:18.827 "name": "BaseBdev1", 00:13:18.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.827 "is_configured": false, 00:13:18.827 "data_offset": 0, 00:13:18.827 "data_size": 0 00:13:18.827 }, 00:13:18.827 { 00:13:18.827 "name": "BaseBdev2", 00:13:18.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.827 "is_configured": false, 00:13:18.827 "data_offset": 0, 00:13:18.827 "data_size": 0 00:13:18.827 } 00:13:18.827 ] 00:13:18.827 }' 00:13:18.827 21:09:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:18.827 21:09:41 -- 
common/autotest_common.sh@10 -- # set +x 00:13:19.394 21:09:42 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:19.653 [2024-06-07 21:09:42.256233] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:19.653 [2024-06-07 21:09:42.256305] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:13:19.653 21:09:42 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:19.911 [2024-06-07 21:09:42.464271] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:19.911 [2024-06-07 21:09:42.464376] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:19.911 [2024-06-07 21:09:42.464405] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:19.911 [2024-06-07 21:09:42.464429] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:19.911 21:09:42 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:20.169 [2024-06-07 21:09:42.679486] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:20.169 BaseBdev1 00:13:20.169 21:09:42 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:20.169 21:09:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:20.169 21:09:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:20.169 21:09:42 -- common/autotest_common.sh@889 -- # local i 00:13:20.169 21:09:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:20.169 21:09:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:20.169 21:09:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:20.428 21:09:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:20.685 [ 00:13:20.685 { 00:13:20.685 "name": "BaseBdev1", 00:13:20.685 "aliases": [ 00:13:20.685 "c1f6954c-7ba7-4891-bf4a-e3ce0f2737fb" 00:13:20.685 ], 00:13:20.685 "product_name": "Malloc disk", 00:13:20.685 "block_size": 512, 00:13:20.685 "num_blocks": 65536, 00:13:20.685 "uuid": "c1f6954c-7ba7-4891-bf4a-e3ce0f2737fb", 00:13:20.685 "assigned_rate_limits": { 00:13:20.685 "rw_ios_per_sec": 0, 00:13:20.685 "rw_mbytes_per_sec": 0, 00:13:20.685 "r_mbytes_per_sec": 0, 00:13:20.685 "w_mbytes_per_sec": 0 00:13:20.685 }, 00:13:20.685 "claimed": true, 00:13:20.685 "claim_type": "exclusive_write", 00:13:20.685 "zoned": false, 00:13:20.685 "supported_io_types": { 00:13:20.685 "read": true, 00:13:20.685 "write": true, 00:13:20.685 "unmap": true, 00:13:20.685 "write_zeroes": true, 00:13:20.685 "flush": true, 00:13:20.685 "reset": true, 00:13:20.685 "compare": false, 00:13:20.685 "compare_and_write": false, 00:13:20.685 "abort": true, 00:13:20.685 "nvme_admin": false, 00:13:20.685 "nvme_io": false 00:13:20.685 }, 00:13:20.685 "memory_domains": [ 00:13:20.685 { 00:13:20.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.685 "dma_device_type": 2 00:13:20.685 } 00:13:20.685 ], 00:13:20.685 "driver_specific": {} 00:13:20.685 } 00:13:20.685 ] 00:13:20.685 21:09:43 
-- common/autotest_common.sh@895 -- # return 0 00:13:20.685 21:09:43 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:20.685 21:09:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:20.685 21:09:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:20.685 21:09:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:20.685 21:09:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:20.685 21:09:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:20.685 21:09:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:20.685 21:09:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:20.685 21:09:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:20.685 21:09:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:20.686 21:09:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.686 21:09:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.686 21:09:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:20.686 "name": "Existed_Raid", 00:13:20.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.686 "strip_size_kb": 64, 00:13:20.686 "state": "configuring", 00:13:20.686 "raid_level": "raid0", 00:13:20.686 "superblock": false, 00:13:20.686 "num_base_bdevs": 2, 00:13:20.686 "num_base_bdevs_discovered": 1, 00:13:20.686 "num_base_bdevs_operational": 2, 00:13:20.686 "base_bdevs_list": [ 00:13:20.686 { 00:13:20.686 "name": "BaseBdev1", 00:13:20.686 "uuid": "c1f6954c-7ba7-4891-bf4a-e3ce0f2737fb", 00:13:20.686 "is_configured": true, 00:13:20.686 "data_offset": 0, 00:13:20.686 "data_size": 65536 00:13:20.686 }, 00:13:20.686 { 00:13:20.686 "name": "BaseBdev2", 00:13:20.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.686 "is_configured": false, 00:13:20.686 "data_offset": 0, 00:13:20.686 "data_size": 0 00:13:20.686 } 00:13:20.686 ] 00:13:20.686 }' 00:13:20.686 21:09:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:20.686 21:09:43 -- common/autotest_common.sh@10 -- # set +x 00:13:21.621 21:09:44 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:21.621 [2024-06-07 21:09:44.215916] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:21.621 [2024-06-07 21:09:44.216019] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:13:21.621 21:09:44 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:13:21.621 21:09:44 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:21.880 [2024-06-07 21:09:44.463983] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.880 [2024-06-07 21:09:44.466226] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.881 [2024-06-07 21:09:44.466300] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.881 21:09:44 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:21.881 21:09:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:21.881 21:09:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:21.881 21:09:44 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:21.881 21:09:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:21.881 21:09:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:21.881 21:09:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:21.881 21:09:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:21.881 21:09:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:21.881 21:09:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:21.881 21:09:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:21.881 21:09:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:21.881 21:09:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.881 21:09:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.140 21:09:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:22.140 "name": "Existed_Raid", 00:13:22.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.140 "strip_size_kb": 64, 00:13:22.140 "state": "configuring", 00:13:22.140 "raid_level": "raid0", 00:13:22.140 "superblock": false, 00:13:22.140 "num_base_bdevs": 2, 00:13:22.140 "num_base_bdevs_discovered": 1, 00:13:22.140 "num_base_bdevs_operational": 2, 00:13:22.140 "base_bdevs_list": [ 00:13:22.140 { 00:13:22.140 "name": "BaseBdev1", 00:13:22.140 "uuid": "c1f6954c-7ba7-4891-bf4a-e3ce0f2737fb", 00:13:22.140 "is_configured": true, 00:13:22.140 "data_offset": 0, 00:13:22.140 "data_size": 65536 00:13:22.140 }, 00:13:22.140 { 00:13:22.140 "name": "BaseBdev2", 00:13:22.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.140 "is_configured": false, 00:13:22.140 "data_offset": 0, 00:13:22.140 "data_size": 0 00:13:22.140 } 00:13:22.140 ] 00:13:22.140 }' 00:13:22.140 21:09:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:22.140 21:09:44 -- common/autotest_common.sh@10 -- # set +x 00:13:23.076 21:09:45 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:23.076 [2024-06-07 21:09:45.713438] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:23.076 [2024-06-07 21:09:45.713515] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:13:23.076 [2024-06-07 21:09:45.713546] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:23.076 [2024-06-07 21:09:45.713808] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:13:23.076 [2024-06-07 21:09:45.714457] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:13:23.076 [2024-06-07 21:09:45.714491] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:13:23.076 [2024-06-07 21:09:45.714942] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.076 BaseBdev2 00:13:23.076 21:09:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:23.076 21:09:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:13:23.076 21:09:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:23.076 21:09:45 -- common/autotest_common.sh@889 -- # local i 00:13:23.076 21:09:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:23.076 21:09:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:23.076 
21:09:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:23.334 21:09:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:23.593 [ 00:13:23.593 { 00:13:23.593 "name": "BaseBdev2", 00:13:23.593 "aliases": [ 00:13:23.593 "db95738a-5086-4de8-8568-622e7abb3fc2" 00:13:23.593 ], 00:13:23.593 "product_name": "Malloc disk", 00:13:23.593 "block_size": 512, 00:13:23.593 "num_blocks": 65536, 00:13:23.593 "uuid": "db95738a-5086-4de8-8568-622e7abb3fc2", 00:13:23.593 "assigned_rate_limits": { 00:13:23.593 "rw_ios_per_sec": 0, 00:13:23.593 "rw_mbytes_per_sec": 0, 00:13:23.593 "r_mbytes_per_sec": 0, 00:13:23.593 "w_mbytes_per_sec": 0 00:13:23.593 }, 00:13:23.593 "claimed": true, 00:13:23.593 "claim_type": "exclusive_write", 00:13:23.593 "zoned": false, 00:13:23.593 "supported_io_types": { 00:13:23.593 "read": true, 00:13:23.593 "write": true, 00:13:23.593 "unmap": true, 00:13:23.593 "write_zeroes": true, 00:13:23.593 "flush": true, 00:13:23.594 "reset": true, 00:13:23.594 "compare": false, 00:13:23.594 "compare_and_write": false, 00:13:23.594 "abort": true, 00:13:23.594 "nvme_admin": false, 00:13:23.594 "nvme_io": false 00:13:23.594 }, 00:13:23.594 "memory_domains": [ 00:13:23.594 { 00:13:23.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.594 "dma_device_type": 2 00:13:23.594 } 00:13:23.594 ], 00:13:23.594 "driver_specific": {} 00:13:23.594 } 00:13:23.594 ] 00:13:23.594 21:09:46 -- common/autotest_common.sh@895 -- # return 0 00:13:23.594 21:09:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:23.594 21:09:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:23.594 21:09:46 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:13:23.594 21:09:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:23.594 21:09:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:23.594 21:09:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:23.594 21:09:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:23.594 21:09:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:23.594 21:09:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:23.594 21:09:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:23.594 21:09:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:23.594 21:09:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:23.594 21:09:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.594 21:09:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.852 21:09:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:23.852 "name": "Existed_Raid", 00:13:23.852 "uuid": "39b84dbe-c791-4d48-b7c8-ce49cccd4d71", 00:13:23.852 "strip_size_kb": 64, 00:13:23.852 "state": "online", 00:13:23.852 "raid_level": "raid0", 00:13:23.852 "superblock": false, 00:13:23.852 "num_base_bdevs": 2, 00:13:23.852 "num_base_bdevs_discovered": 2, 00:13:23.852 "num_base_bdevs_operational": 2, 00:13:23.852 "base_bdevs_list": [ 00:13:23.852 { 00:13:23.852 "name": "BaseBdev1", 00:13:23.852 "uuid": "c1f6954c-7ba7-4891-bf4a-e3ce0f2737fb", 00:13:23.852 "is_configured": true, 00:13:23.852 "data_offset": 0, 00:13:23.852 "data_size": 65536 00:13:23.852 }, 00:13:23.852 { 00:13:23.852 "name": "BaseBdev2", 
00:13:23.852 "uuid": "db95738a-5086-4de8-8568-622e7abb3fc2", 00:13:23.852 "is_configured": true, 00:13:23.852 "data_offset": 0, 00:13:23.852 "data_size": 65536 00:13:23.852 } 00:13:23.852 ] 00:13:23.852 }' 00:13:23.852 21:09:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:23.852 21:09:46 -- common/autotest_common.sh@10 -- # set +x 00:13:24.789 21:09:47 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:24.789 [2024-06-07 21:09:47.369984] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:24.789 [2024-06-07 21:09:47.370031] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.789 [2024-06-07 21:09:47.370127] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.789 21:09:47 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:24.789 21:09:47 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:13:24.789 21:09:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:24.789 21:09:47 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:24.789 21:09:47 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:24.789 21:09:47 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:13:24.789 21:09:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:24.789 21:09:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:24.789 21:09:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:24.789 21:09:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:24.789 21:09:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:24.789 21:09:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:24.789 21:09:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:24.789 21:09:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:24.789 21:09:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:24.790 21:09:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.790 21:09:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.048 21:09:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:25.049 "name": "Existed_Raid", 00:13:25.049 "uuid": "39b84dbe-c791-4d48-b7c8-ce49cccd4d71", 00:13:25.049 "strip_size_kb": 64, 00:13:25.049 "state": "offline", 00:13:25.049 "raid_level": "raid0", 00:13:25.049 "superblock": false, 00:13:25.049 "num_base_bdevs": 2, 00:13:25.049 "num_base_bdevs_discovered": 1, 00:13:25.049 "num_base_bdevs_operational": 1, 00:13:25.049 "base_bdevs_list": [ 00:13:25.049 { 00:13:25.049 "name": null, 00:13:25.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.049 "is_configured": false, 00:13:25.049 "data_offset": 0, 00:13:25.049 "data_size": 65536 00:13:25.049 }, 00:13:25.049 { 00:13:25.049 "name": "BaseBdev2", 00:13:25.049 "uuid": "db95738a-5086-4de8-8568-622e7abb3fc2", 00:13:25.049 "is_configured": true, 00:13:25.049 "data_offset": 0, 00:13:25.049 "data_size": 65536 00:13:25.049 } 00:13:25.049 ] 00:13:25.049 }' 00:13:25.049 21:09:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:25.049 21:09:47 -- common/autotest_common.sh@10 -- # set +x 00:13:25.666 21:09:48 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:25.666 21:09:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:25.666 21:09:48 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.666 21:09:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:25.925 21:09:48 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:25.925 21:09:48 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.925 21:09:48 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:26.183 [2024-06-07 21:09:48.748753] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:26.183 [2024-06-07 21:09:48.748857] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:13:26.183 21:09:48 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:26.183 21:09:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:26.183 21:09:48 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.183 21:09:48 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:26.441 21:09:49 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:13:26.441 21:09:49 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:26.441 21:09:49 -- bdev/bdev_raid.sh@287 -- # killprocess 125212 00:13:26.441 21:09:49 -- common/autotest_common.sh@926 -- # '[' -z 125212 ']' 00:13:26.441 21:09:49 -- common/autotest_common.sh@930 -- # kill -0 125212 00:13:26.441 21:09:49 -- common/autotest_common.sh@931 -- # uname 00:13:26.441 21:09:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:26.441 21:09:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125212 00:13:26.441 killing process with pid 125212 00:13:26.441 21:09:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:26.441 21:09:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:26.441 21:09:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125212' 00:13:26.441 21:09:49 -- common/autotest_common.sh@945 -- # kill 125212 00:13:26.441 21:09:49 -- common/autotest_common.sh@950 -- # wait 125212 00:13:26.441 [2024-06-07 21:09:49.052469] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:26.441 [2024-06-07 21:09:49.052611] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:26.700 ************************************ 00:13:26.700 END TEST raid_state_function_test 00:13:26.700 ************************************ 00:13:26.700 21:09:49 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:26.700 00:13:26.700 real 0m9.375s 00:13:26.700 user 0m17.305s 00:13:26.700 sys 0m1.085s 00:13:26.700 21:09:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:26.700 21:09:49 -- common/autotest_common.sh@10 -- # set +x 00:13:26.700 21:09:49 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:13:26.700 21:09:49 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:26.700 21:09:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:26.700 21:09:49 -- common/autotest_common.sh@10 -- # set +x 00:13:26.700 ************************************ 00:13:26.700 START TEST raid_state_function_test_sb 00:13:26.700 ************************************ 00:13:26.700 21:09:49 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:13:26.700 21:09:49 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:13:26.700 21:09:49 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:26.700 21:09:49 -- 
bdev/bdev_raid.sh@204 -- # local superblock=true 00:13:26.700 21:09:49 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:26.700 21:09:49 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:26.700 21:09:49 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:26.700 21:09:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:26.700 21:09:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:26.700 21:09:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:26.700 21:09:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:26.700 21:09:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:26.700 21:09:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:26.700 21:09:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:26.700 21:09:49 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:26.700 21:09:49 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:26.700 21:09:49 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:26.700 21:09:49 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:26.701 21:09:49 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:26.701 21:09:49 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:13:26.701 21:09:49 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:26.701 21:09:49 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:26.701 21:09:49 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:13:26.701 21:09:49 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:13:26.701 21:09:49 -- bdev/bdev_raid.sh@226 -- # raid_pid=125545 00:13:26.701 21:09:49 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125545' 00:13:26.701 Process raid pid: 125545 00:13:26.701 21:09:49 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125545 /var/tmp/spdk-raid.sock 00:13:26.701 21:09:49 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:26.701 21:09:49 -- common/autotest_common.sh@819 -- # '[' -z 125545 ']' 00:13:26.701 21:09:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:26.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:26.701 21:09:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:26.701 21:09:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:26.701 21:09:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:26.701 21:09:49 -- common/autotest_common.sh@10 -- # set +x 00:13:26.959 [2024-06-07 21:09:49.386640] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
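
Annotation: raid_state_function_test_sb is the same state-machine test rerun with superblock=true, so the only material difference is the -s flag carried in superblock_create_arg. Both create forms appear verbatim in this log; side by side (RPC abbreviates the rpc.py invocation from the trace):

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    # Without an on-disk superblock (previous test):
    $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    # With one (-s): the superblock reserves space on each base bdev, which is
    # why data_offset becomes 2048 and data_size drops from 65536 to 63488 in
    # the JSON dumps that follow.
    $RPC bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
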
00:13:26.959 [2024-06-07 21:09:49.387332] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.959 [2024-06-07 21:09:49.545355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.959 [2024-06-07 21:09:49.615457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.217 [2024-06-07 21:09:49.671290] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.781 21:09:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:27.782 21:09:50 -- common/autotest_common.sh@852 -- # return 0 00:13:27.782 21:09:50 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:28.039 [2024-06-07 21:09:50.469553] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:28.039 [2024-06-07 21:09:50.469627] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:28.039 [2024-06-07 21:09:50.469657] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:28.039 [2024-06-07 21:09:50.469677] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:28.039 21:09:50 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:28.039 21:09:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:28.039 21:09:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:28.039 21:09:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:28.039 21:09:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:28.039 21:09:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:28.040 21:09:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:28.040 21:09:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:28.040 21:09:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:28.040 21:09:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:28.040 21:09:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.040 21:09:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.297 21:09:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:28.297 "name": "Existed_Raid", 00:13:28.297 "uuid": "40764478-b369-4675-bba1-c9ce0bde15e5", 00:13:28.297 "strip_size_kb": 64, 00:13:28.297 "state": "configuring", 00:13:28.297 "raid_level": "raid0", 00:13:28.297 "superblock": true, 00:13:28.297 "num_base_bdevs": 2, 00:13:28.297 "num_base_bdevs_discovered": 0, 00:13:28.297 "num_base_bdevs_operational": 2, 00:13:28.297 "base_bdevs_list": [ 00:13:28.297 { 00:13:28.297 "name": "BaseBdev1", 00:13:28.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.297 "is_configured": false, 00:13:28.297 "data_offset": 0, 00:13:28.297 "data_size": 0 00:13:28.297 }, 00:13:28.297 { 00:13:28.297 "name": "BaseBdev2", 00:13:28.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.297 "is_configured": false, 00:13:28.297 "data_offset": 0, 00:13:28.297 "data_size": 0 00:13:28.297 } 00:13:28.297 ] 00:13:28.297 }' 00:13:28.297 21:09:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:28.297 21:09:50 -- 
common/autotest_common.sh@10 -- # set +x 00:13:28.863 21:09:51 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:29.120 [2024-06-07 21:09:51.641680] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:29.120 [2024-06-07 21:09:51.641729] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:13:29.120 21:09:51 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:29.378 [2024-06-07 21:09:51.849750] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:29.378 [2024-06-07 21:09:51.849900] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:29.378 [2024-06-07 21:09:51.849930] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:29.378 [2024-06-07 21:09:51.849954] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:29.378 21:09:51 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:29.636 [2024-06-07 21:09:52.077633] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:29.636 BaseBdev1 00:13:29.636 21:09:52 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:29.636 21:09:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:29.636 21:09:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:29.636 21:09:52 -- common/autotest_common.sh@889 -- # local i 00:13:29.636 21:09:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:29.636 21:09:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:29.636 21:09:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:29.894 21:09:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:30.152 [ 00:13:30.152 { 00:13:30.152 "name": "BaseBdev1", 00:13:30.152 "aliases": [ 00:13:30.152 "f4e23f12-7af0-4e38-8f74-0c276a7917ab" 00:13:30.152 ], 00:13:30.152 "product_name": "Malloc disk", 00:13:30.152 "block_size": 512, 00:13:30.152 "num_blocks": 65536, 00:13:30.152 "uuid": "f4e23f12-7af0-4e38-8f74-0c276a7917ab", 00:13:30.152 "assigned_rate_limits": { 00:13:30.152 "rw_ios_per_sec": 0, 00:13:30.152 "rw_mbytes_per_sec": 0, 00:13:30.152 "r_mbytes_per_sec": 0, 00:13:30.152 "w_mbytes_per_sec": 0 00:13:30.152 }, 00:13:30.152 "claimed": true, 00:13:30.152 "claim_type": "exclusive_write", 00:13:30.152 "zoned": false, 00:13:30.152 "supported_io_types": { 00:13:30.152 "read": true, 00:13:30.152 "write": true, 00:13:30.152 "unmap": true, 00:13:30.152 "write_zeroes": true, 00:13:30.152 "flush": true, 00:13:30.152 "reset": true, 00:13:30.152 "compare": false, 00:13:30.152 "compare_and_write": false, 00:13:30.152 "abort": true, 00:13:30.152 "nvme_admin": false, 00:13:30.152 "nvme_io": false 00:13:30.152 }, 00:13:30.152 "memory_domains": [ 00:13:30.152 { 00:13:30.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.152 "dma_device_type": 2 00:13:30.152 } 00:13:30.152 ], 00:13:30.152 "driver_specific": {} 00:13:30.152 } 00:13:30.152 ] 00:13:30.152 
21:09:52 -- common/autotest_common.sh@895 -- # return 0 00:13:30.152 21:09:52 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:30.152 21:09:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:30.152 21:09:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:30.152 21:09:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:30.152 21:09:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:30.152 21:09:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:30.152 21:09:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:30.152 21:09:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:30.152 21:09:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:30.152 21:09:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:30.152 21:09:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.152 21:09:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.409 21:09:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:30.409 "name": "Existed_Raid", 00:13:30.409 "uuid": "f3a2c1e8-0c1e-44d6-a4c5-29ef0a5291b3", 00:13:30.409 "strip_size_kb": 64, 00:13:30.409 "state": "configuring", 00:13:30.409 "raid_level": "raid0", 00:13:30.409 "superblock": true, 00:13:30.409 "num_base_bdevs": 2, 00:13:30.409 "num_base_bdevs_discovered": 1, 00:13:30.409 "num_base_bdevs_operational": 2, 00:13:30.409 "base_bdevs_list": [ 00:13:30.409 { 00:13:30.409 "name": "BaseBdev1", 00:13:30.409 "uuid": "f4e23f12-7af0-4e38-8f74-0c276a7917ab", 00:13:30.409 "is_configured": true, 00:13:30.409 "data_offset": 2048, 00:13:30.409 "data_size": 63488 00:13:30.409 }, 00:13:30.409 { 00:13:30.409 "name": "BaseBdev2", 00:13:30.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.409 "is_configured": false, 00:13:30.409 "data_offset": 0, 00:13:30.409 "data_size": 0 00:13:30.409 } 00:13:30.409 ] 00:13:30.409 }' 00:13:30.409 21:09:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:30.409 21:09:52 -- common/autotest_common.sh@10 -- # set +x 00:13:30.973 21:09:53 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:31.231 [2024-06-07 21:09:53.750144] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:31.231 [2024-06-07 21:09:53.750243] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:13:31.231 21:09:53 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:13:31.231 21:09:53 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:31.490 21:09:53 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:31.748 BaseBdev1 00:13:31.748 21:09:54 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:13:31.748 21:09:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:31.748 21:09:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:31.748 21:09:54 -- common/autotest_common.sh@889 -- # local i 00:13:31.748 21:09:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:31.748 21:09:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:31.748 21:09:54 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:32.007 21:09:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:32.007 [ 00:13:32.007 { 00:13:32.007 "name": "BaseBdev1", 00:13:32.007 "aliases": [ 00:13:32.007 "e28efbde-6abb-4726-be88-945ff35fb936" 00:13:32.007 ], 00:13:32.007 "product_name": "Malloc disk", 00:13:32.007 "block_size": 512, 00:13:32.007 "num_blocks": 65536, 00:13:32.007 "uuid": "e28efbde-6abb-4726-be88-945ff35fb936", 00:13:32.007 "assigned_rate_limits": { 00:13:32.007 "rw_ios_per_sec": 0, 00:13:32.007 "rw_mbytes_per_sec": 0, 00:13:32.007 "r_mbytes_per_sec": 0, 00:13:32.007 "w_mbytes_per_sec": 0 00:13:32.007 }, 00:13:32.007 "claimed": false, 00:13:32.007 "zoned": false, 00:13:32.007 "supported_io_types": { 00:13:32.007 "read": true, 00:13:32.007 "write": true, 00:13:32.007 "unmap": true, 00:13:32.007 "write_zeroes": true, 00:13:32.007 "flush": true, 00:13:32.007 "reset": true, 00:13:32.007 "compare": false, 00:13:32.007 "compare_and_write": false, 00:13:32.007 "abort": true, 00:13:32.007 "nvme_admin": false, 00:13:32.007 "nvme_io": false 00:13:32.007 }, 00:13:32.007 "memory_domains": [ 00:13:32.007 { 00:13:32.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.007 "dma_device_type": 2 00:13:32.007 } 00:13:32.007 ], 00:13:32.007 "driver_specific": {} 00:13:32.007 } 00:13:32.007 ] 00:13:32.007 21:09:54 -- common/autotest_common.sh@895 -- # return 0 00:13:32.007 21:09:54 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:32.265 [2024-06-07 21:09:54.852542] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:32.265 [2024-06-07 21:09:54.854827] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:32.265 [2024-06-07 21:09:54.854911] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:32.265 21:09:54 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:32.265 21:09:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:32.265 21:09:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:32.265 21:09:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:32.265 21:09:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:32.265 21:09:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:32.265 21:09:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:32.265 21:09:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:32.265 21:09:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:32.265 21:09:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:32.265 21:09:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:32.265 21:09:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:32.265 21:09:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.266 21:09:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.524 21:09:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:32.524 "name": "Existed_Raid", 00:13:32.524 "uuid": "ebcb206c-36f6-42d5-9396-1f0623c9c251", 00:13:32.524 "strip_size_kb": 64, 00:13:32.524 "state": 
"configuring", 00:13:32.524 "raid_level": "raid0", 00:13:32.524 "superblock": true, 00:13:32.524 "num_base_bdevs": 2, 00:13:32.524 "num_base_bdevs_discovered": 1, 00:13:32.524 "num_base_bdevs_operational": 2, 00:13:32.524 "base_bdevs_list": [ 00:13:32.524 { 00:13:32.524 "name": "BaseBdev1", 00:13:32.524 "uuid": "e28efbde-6abb-4726-be88-945ff35fb936", 00:13:32.524 "is_configured": true, 00:13:32.524 "data_offset": 2048, 00:13:32.524 "data_size": 63488 00:13:32.524 }, 00:13:32.524 { 00:13:32.524 "name": "BaseBdev2", 00:13:32.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.524 "is_configured": false, 00:13:32.524 "data_offset": 0, 00:13:32.524 "data_size": 0 00:13:32.524 } 00:13:32.524 ] 00:13:32.524 }' 00:13:32.524 21:09:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:32.524 21:09:55 -- common/autotest_common.sh@10 -- # set +x 00:13:33.458 21:09:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:33.458 [2024-06-07 21:09:56.073115] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:33.458 [2024-06-07 21:09:56.073492] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:13:33.458 [2024-06-07 21:09:56.073514] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:33.458 [2024-06-07 21:09:56.073737] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:13:33.458 BaseBdev2 00:13:33.458 [2024-06-07 21:09:56.074299] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:13:33.458 [2024-06-07 21:09:56.074342] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:13:33.458 [2024-06-07 21:09:56.074562] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.458 21:09:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:33.458 21:09:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:13:33.458 21:09:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:33.458 21:09:56 -- common/autotest_common.sh@889 -- # local i 00:13:33.458 21:09:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:33.458 21:09:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:33.458 21:09:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:33.716 21:09:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:33.975 [ 00:13:33.975 { 00:13:33.975 "name": "BaseBdev2", 00:13:33.975 "aliases": [ 00:13:33.975 "60545a1a-9959-46d0-8bd0-bd95d6141700" 00:13:33.975 ], 00:13:33.975 "product_name": "Malloc disk", 00:13:33.975 "block_size": 512, 00:13:33.975 "num_blocks": 65536, 00:13:33.975 "uuid": "60545a1a-9959-46d0-8bd0-bd95d6141700", 00:13:33.975 "assigned_rate_limits": { 00:13:33.975 "rw_ios_per_sec": 0, 00:13:33.975 "rw_mbytes_per_sec": 0, 00:13:33.975 "r_mbytes_per_sec": 0, 00:13:33.975 "w_mbytes_per_sec": 0 00:13:33.975 }, 00:13:33.975 "claimed": true, 00:13:33.975 "claim_type": "exclusive_write", 00:13:33.975 "zoned": false, 00:13:33.975 "supported_io_types": { 00:13:33.975 "read": true, 00:13:33.975 "write": true, 00:13:33.975 "unmap": true, 00:13:33.975 "write_zeroes": true, 00:13:33.975 "flush": true, 00:13:33.975 
"reset": true, 00:13:33.975 "compare": false, 00:13:33.975 "compare_and_write": false, 00:13:33.975 "abort": true, 00:13:33.975 "nvme_admin": false, 00:13:33.975 "nvme_io": false 00:13:33.975 }, 00:13:33.975 "memory_domains": [ 00:13:33.975 { 00:13:33.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.975 "dma_device_type": 2 00:13:33.975 } 00:13:33.975 ], 00:13:33.975 "driver_specific": {} 00:13:33.975 } 00:13:33.975 ] 00:13:33.975 21:09:56 -- common/autotest_common.sh@895 -- # return 0 00:13:33.975 21:09:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:33.975 21:09:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:33.975 21:09:56 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:13:33.975 21:09:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:33.975 21:09:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:33.975 21:09:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:33.975 21:09:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:33.975 21:09:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:33.975 21:09:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:33.975 21:09:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:33.975 21:09:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:33.975 21:09:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:33.975 21:09:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.975 21:09:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.233 21:09:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:34.233 "name": "Existed_Raid", 00:13:34.233 "uuid": "ebcb206c-36f6-42d5-9396-1f0623c9c251", 00:13:34.233 "strip_size_kb": 64, 00:13:34.233 "state": "online", 00:13:34.233 "raid_level": "raid0", 00:13:34.233 "superblock": true, 00:13:34.233 "num_base_bdevs": 2, 00:13:34.233 "num_base_bdevs_discovered": 2, 00:13:34.233 "num_base_bdevs_operational": 2, 00:13:34.233 "base_bdevs_list": [ 00:13:34.233 { 00:13:34.233 "name": "BaseBdev1", 00:13:34.233 "uuid": "e28efbde-6abb-4726-be88-945ff35fb936", 00:13:34.233 "is_configured": true, 00:13:34.233 "data_offset": 2048, 00:13:34.233 "data_size": 63488 00:13:34.233 }, 00:13:34.233 { 00:13:34.233 "name": "BaseBdev2", 00:13:34.233 "uuid": "60545a1a-9959-46d0-8bd0-bd95d6141700", 00:13:34.233 "is_configured": true, 00:13:34.233 "data_offset": 2048, 00:13:34.233 "data_size": 63488 00:13:34.233 } 00:13:34.233 ] 00:13:34.233 }' 00:13:34.233 21:09:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:34.233 21:09:56 -- common/autotest_common.sh@10 -- # set +x 00:13:34.800 21:09:57 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:35.058 [2024-06-07 21:09:57.609708] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:35.058 [2024-06-07 21:09:57.609745] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:35.058 [2024-06-07 21:09:57.609842] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.058 21:09:57 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:35.058 21:09:57 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:13:35.058 21:09:57 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:35.058 21:09:57 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:35.058 
21:09:57 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:35.058 21:09:57 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:13:35.058 21:09:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:35.058 21:09:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:35.058 21:09:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:35.058 21:09:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:35.058 21:09:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:35.058 21:09:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:35.058 21:09:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:35.058 21:09:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:35.058 21:09:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:35.058 21:09:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.058 21:09:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.316 21:09:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:35.316 "name": "Existed_Raid", 00:13:35.316 "uuid": "ebcb206c-36f6-42d5-9396-1f0623c9c251", 00:13:35.316 "strip_size_kb": 64, 00:13:35.316 "state": "offline", 00:13:35.316 "raid_level": "raid0", 00:13:35.317 "superblock": true, 00:13:35.317 "num_base_bdevs": 2, 00:13:35.317 "num_base_bdevs_discovered": 1, 00:13:35.317 "num_base_bdevs_operational": 1, 00:13:35.317 "base_bdevs_list": [ 00:13:35.317 { 00:13:35.317 "name": null, 00:13:35.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.317 "is_configured": false, 00:13:35.317 "data_offset": 2048, 00:13:35.317 "data_size": 63488 00:13:35.317 }, 00:13:35.317 { 00:13:35.317 "name": "BaseBdev2", 00:13:35.317 "uuid": "60545a1a-9959-46d0-8bd0-bd95d6141700", 00:13:35.317 "is_configured": true, 00:13:35.317 "data_offset": 2048, 00:13:35.317 "data_size": 63488 00:13:35.317 } 00:13:35.317 ] 00:13:35.317 }' 00:13:35.317 21:09:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:35.317 21:09:57 -- common/autotest_common.sh@10 -- # set +x 00:13:36.252 21:09:58 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:36.252 21:09:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:36.252 21:09:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.252 21:09:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:36.252 21:09:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:36.252 21:09:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:36.252 21:09:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:36.510 [2024-06-07 21:09:59.131930] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:36.510 [2024-06-07 21:09:59.132047] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:13:36.510 21:09:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:36.510 21:09:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:36.510 21:09:59 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.510 21:09:59 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:36.768 21:09:59 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:13:36.768 21:09:59 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:36.769 21:09:59 -- bdev/bdev_raid.sh@287 -- # killprocess 125545 00:13:36.769 21:09:59 -- common/autotest_common.sh@926 -- # '[' -z 125545 ']' 00:13:36.769 21:09:59 -- common/autotest_common.sh@930 -- # kill -0 125545 00:13:36.769 21:09:59 -- common/autotest_common.sh@931 -- # uname 00:13:36.769 21:09:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:36.769 21:09:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125545 00:13:36.769 killing process with pid 125545 00:13:36.769 21:09:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:36.769 21:09:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:36.769 21:09:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125545' 00:13:36.769 21:09:59 -- common/autotest_common.sh@945 -- # kill 125545 00:13:36.769 21:09:59 -- common/autotest_common.sh@950 -- # wait 125545 00:13:36.769 [2024-06-07 21:09:59.393454] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:36.769 [2024-06-07 21:09:59.393573] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:37.027 ************************************ 00:13:37.027 END TEST raid_state_function_test_sb 00:13:37.027 ************************************ 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:37.027 00:13:37.027 real 0m10.303s 00:13:37.027 user 0m18.945s 00:13:37.027 sys 0m1.202s 00:13:37.027 21:09:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.027 21:09:59 -- common/autotest_common.sh@10 -- # set +x 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:13:37.027 21:09:59 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:37.027 21:09:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:37.027 21:09:59 -- common/autotest_common.sh@10 -- # set +x 00:13:37.027 ************************************ 00:13:37.027 START TEST raid_superblock_test 00:13:37.027 ************************************ 00:13:37.027 21:09:59 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@357 -- # raid_pid=125898 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@358 -- # waitforlisten 125898 
/var/tmp/spdk-raid.sock 00:13:37.027 21:09:59 -- common/autotest_common.sh@819 -- # '[' -z 125898 ']' 00:13:37.027 21:09:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:37.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:37.027 21:09:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:37.027 21:09:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:37.027 21:09:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:37.027 21:09:59 -- common/autotest_common.sh@10 -- # set +x 00:13:37.027 21:09:59 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:37.286 [2024-06-07 21:09:59.748968] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:37.286 [2024-06-07 21:09:59.749424] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125898 ] 00:13:37.286 [2024-06-07 21:09:59.912004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.607 [2024-06-07 21:09:59.987816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.607 [2024-06-07 21:10:00.042975] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.173 21:10:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:38.173 21:10:00 -- common/autotest_common.sh@852 -- # return 0 00:13:38.173 21:10:00 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:13:38.174 21:10:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:38.174 21:10:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:13:38.174 21:10:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:13:38.174 21:10:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:38.174 21:10:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:38.174 21:10:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:38.174 21:10:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:38.174 21:10:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:38.432 malloc1 00:13:38.432 21:10:00 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:38.690 [2024-06-07 21:10:01.140302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:38.690 [2024-06-07 21:10:01.140450] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.690 [2024-06-07 21:10:01.140493] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:13:38.690 [2024-06-07 21:10:01.140542] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.690 [2024-06-07 21:10:01.143074] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.690 [2024-06-07 21:10:01.143153] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:38.690 pt1 00:13:38.690 21:10:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
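
Annotation: unlike the state-function tests, raid_superblock_test builds each base bdev as a passthru stacked on a malloc disk so that it carries a fixed, known UUID (the trace pins ...0001 and ...0002), presumably to make the superblock contents predictable. The per-bdev pattern from the loop above, shown for the first iteration:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    $RPC bdev_malloc_create 32 512 -b malloc1    # 32 MiB backing disk, 512 B blocks
    $RPC bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001  # pin the UUID the test expects
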
00:13:38.690 21:10:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:38.690 21:10:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:13:38.690 21:10:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:13:38.690 21:10:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:38.690 21:10:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:38.690 21:10:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:38.690 21:10:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:38.690 21:10:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:38.947 malloc2 00:13:38.947 21:10:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:38.947 [2024-06-07 21:10:01.620777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:38.947 [2024-06-07 21:10:01.620961] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.947 [2024-06-07 21:10:01.621035] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:13:38.947 [2024-06-07 21:10:01.621102] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.205 [2024-06-07 21:10:01.623984] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.205 [2024-06-07 21:10:01.624086] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:39.205 pt2 00:13:39.205 21:10:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:39.205 21:10:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:39.205 21:10:01 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:13:39.205 [2024-06-07 21:10:01.829084] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:39.205 [2024-06-07 21:10:01.831433] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:39.205 [2024-06-07 21:10:01.831691] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:13:39.205 [2024-06-07 21:10:01.831717] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:39.205 [2024-06-07 21:10:01.831890] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:13:39.205 [2024-06-07 21:10:01.832362] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:13:39.205 [2024-06-07 21:10:01.832385] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:13:39.205 [2024-06-07 21:10:01.832616] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.205 21:10:01 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:39.205 21:10:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:39.205 21:10:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:39.205 21:10:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:39.205 21:10:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:39.205 21:10:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
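Assembly itself is a single call. A sketch of the create step just traced, reusing the assumed rpc() shorthand; the flag readings (-z strip size in KiB, -r level, -s superblock) are corroborated by the strip_size_kb and superblock fields in the JSON dumps around it:

# Build a two-member RAID0 from the passthru bdevs: 64 KiB strip (-z 64),
# with -s writing a superblock to each member so the array can be rediscovered later
rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s

# The freshly created array should report "online" with both members configured
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'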
00:13:39.205 21:10:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:39.205 21:10:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:39.205 21:10:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:39.205 21:10:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:39.205 21:10:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.205 21:10:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.463 21:10:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:39.463 "name": "raid_bdev1", 00:13:39.463 "uuid": "bc04eaaf-791d-469a-8c74-676c692e3b0a", 00:13:39.463 "strip_size_kb": 64, 00:13:39.463 "state": "online", 00:13:39.463 "raid_level": "raid0", 00:13:39.463 "superblock": true, 00:13:39.463 "num_base_bdevs": 2, 00:13:39.463 "num_base_bdevs_discovered": 2, 00:13:39.463 "num_base_bdevs_operational": 2, 00:13:39.463 "base_bdevs_list": [ 00:13:39.463 { 00:13:39.463 "name": "pt1", 00:13:39.463 "uuid": "b7bdc4c1-13ed-5962-98a8-570cb42ec868", 00:13:39.463 "is_configured": true, 00:13:39.463 "data_offset": 2048, 00:13:39.463 "data_size": 63488 00:13:39.463 }, 00:13:39.463 { 00:13:39.463 "name": "pt2", 00:13:39.463 "uuid": "f89b1f3e-f40e-5e57-8752-22b58abbf76c", 00:13:39.463 "is_configured": true, 00:13:39.463 "data_offset": 2048, 00:13:39.463 "data_size": 63488 00:13:39.463 } 00:13:39.463 ] 00:13:39.463 }' 00:13:39.463 21:10:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:39.463 21:10:02 -- common/autotest_common.sh@10 -- # set +x 00:13:40.398 21:10:02 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:40.398 21:10:02 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:13:40.398 [2024-06-07 21:10:02.979069] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:40.398 21:10:02 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=bc04eaaf-791d-469a-8c74-676c692e3b0a 00:13:40.398 21:10:02 -- bdev/bdev_raid.sh@380 -- # '[' -z bc04eaaf-791d-469a-8c74-676c692e3b0a ']' 00:13:40.398 21:10:02 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:40.656 [2024-06-07 21:10:03.238912] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:40.656 [2024-06-07 21:10:03.238962] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:40.656 [2024-06-07 21:10:03.239129] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.656 [2024-06-07 21:10:03.239198] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.657 [2024-06-07 21:10:03.239211] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:13:40.657 21:10:03 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.657 21:10:03 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:13:40.914 21:10:03 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:13:40.914 21:10:03 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:13:40.914 21:10:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:40.914 21:10:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
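The teardown the trace is entering, and the superblock round-trip it sets up, can be condensed as follows; every command mirrors one that appears in this test, with rpc() again the assumed shorthand. Capturing the UUID first is what lets the test prove that the array which later reassembles from on-disk metadata is the same one:

uuid=$(rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')

# Delete top-down: the raid first, then both passthru members
rpc bdev_raid_delete raid_bdev1
rpc bdev_passthru_delete pt1
rpc bdev_passthru_delete pt2
rpc bdev_get_bdevs | jq -r '[.[] | select(.product_name == "passthru")] | any'   # expect: false

# Re-registering the passthrus lets the examine path find the superblocks and reassemble
rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001   # raid: configuring
rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002   # raid: online
[ "$(rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')" = "$uuid" ]            # same array

In between, the test also confirms that creating a fresh array directly on malloc1/malloc2 is refused with JSON-RPC error -17 ("File exists"), since both still carry a raid superblock.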
00:13:41.172 21:10:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:41.172 21:10:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:41.430 21:10:03 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:41.430 21:10:03 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:41.689 21:10:04 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:13:41.689 21:10:04 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:13:41.689 21:10:04 -- common/autotest_common.sh@640 -- # local es=0 00:13:41.689 21:10:04 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:13:41.689 21:10:04 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.689 21:10:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:41.689 21:10:04 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.689 21:10:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:41.689 21:10:04 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.689 21:10:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:41.689 21:10:04 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.689 21:10:04 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:41.689 21:10:04 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:13:41.948 [2024-06-07 21:10:04.431181] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:41.948 [2024-06-07 21:10:04.433414] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:41.948 [2024-06-07 21:10:04.433505] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:13:41.948 [2024-06-07 21:10:04.433602] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:13:41.948 [2024-06-07 21:10:04.433640] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:41.948 [2024-06-07 21:10:04.433652] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:13:41.948 request: 00:13:41.948 { 00:13:41.948 "name": "raid_bdev1", 00:13:41.948 "raid_level": "raid0", 00:13:41.948 "base_bdevs": [ 00:13:41.948 "malloc1", 00:13:41.948 "malloc2" 00:13:41.948 ], 00:13:41.948 "superblock": false, 00:13:41.948 "strip_size_kb": 64, 00:13:41.948 "method": "bdev_raid_create", 00:13:41.948 "req_id": 1 00:13:41.948 } 00:13:41.948 Got JSON-RPC error response 00:13:41.948 response: 00:13:41.948 { 00:13:41.948 "code": -17, 00:13:41.948 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:41.948 } 00:13:41.948 21:10:04 -- common/autotest_common.sh@643 -- # es=1 00:13:41.948 21:10:04 -- common/autotest_common.sh@651 
-- # (( es > 128 )) 00:13:41.948 21:10:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:41.948 21:10:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:41.948 21:10:04 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.948 21:10:04 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:13:42.206 21:10:04 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:13:42.206 21:10:04 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:13:42.206 21:10:04 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:42.464 [2024-06-07 21:10:04.883300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:42.464 [2024-06-07 21:10:04.883476] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.464 [2024-06-07 21:10:04.883536] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:42.464 [2024-06-07 21:10:04.883565] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.464 [2024-06-07 21:10:04.886241] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.464 [2024-06-07 21:10:04.886342] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:42.464 [2024-06-07 21:10:04.886450] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:13:42.464 [2024-06-07 21:10:04.886544] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:42.464 pt1 00:13:42.464 21:10:04 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:13:42.464 21:10:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:42.464 21:10:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:42.464 21:10:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:42.464 21:10:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:42.464 21:10:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:42.464 21:10:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:42.464 21:10:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:42.464 21:10:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:42.464 21:10:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:42.464 21:10:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.464 21:10:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.721 21:10:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:42.721 "name": "raid_bdev1", 00:13:42.721 "uuid": "bc04eaaf-791d-469a-8c74-676c692e3b0a", 00:13:42.721 "strip_size_kb": 64, 00:13:42.721 "state": "configuring", 00:13:42.721 "raid_level": "raid0", 00:13:42.721 "superblock": true, 00:13:42.721 "num_base_bdevs": 2, 00:13:42.721 "num_base_bdevs_discovered": 1, 00:13:42.721 "num_base_bdevs_operational": 2, 00:13:42.721 "base_bdevs_list": [ 00:13:42.721 { 00:13:42.721 "name": "pt1", 00:13:42.721 "uuid": "b7bdc4c1-13ed-5962-98a8-570cb42ec868", 00:13:42.721 "is_configured": true, 00:13:42.721 "data_offset": 2048, 00:13:42.721 "data_size": 63488 00:13:42.721 }, 00:13:42.721 { 00:13:42.721 "name": null, 00:13:42.721 "uuid": "f89b1f3e-f40e-5e57-8752-22b58abbf76c", 00:13:42.721 
"is_configured": false, 00:13:42.721 "data_offset": 2048, 00:13:42.721 "data_size": 63488 00:13:42.721 } 00:13:42.721 ] 00:13:42.721 }' 00:13:42.721 21:10:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:42.721 21:10:05 -- common/autotest_common.sh@10 -- # set +x 00:13:43.288 21:10:05 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:13:43.288 21:10:05 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:13:43.288 21:10:05 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:43.288 21:10:05 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:43.546 [2024-06-07 21:10:06.079591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:43.546 [2024-06-07 21:10:06.079734] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.546 [2024-06-07 21:10:06.079778] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:43.546 [2024-06-07 21:10:06.079806] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.546 [2024-06-07 21:10:06.080363] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.546 [2024-06-07 21:10:06.080466] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:43.546 [2024-06-07 21:10:06.080618] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:43.546 [2024-06-07 21:10:06.080658] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:43.546 [2024-06-07 21:10:06.080782] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:13:43.546 [2024-06-07 21:10:06.080806] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:43.546 [2024-06-07 21:10:06.080951] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:13:43.546 [2024-06-07 21:10:06.081285] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:13:43.546 [2024-06-07 21:10:06.081324] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:13:43.546 [2024-06-07 21:10:06.081436] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.546 pt2 00:13:43.546 21:10:06 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:13:43.546 21:10:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:43.546 21:10:06 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:43.546 21:10:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:43.546 21:10:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:43.546 21:10:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:43.546 21:10:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:43.546 21:10:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:43.546 21:10:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:43.546 21:10:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:43.546 21:10:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:43.546 21:10:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:43.546 21:10:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.546 21:10:06 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.804 21:10:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:43.804 "name": "raid_bdev1", 00:13:43.804 "uuid": "bc04eaaf-791d-469a-8c74-676c692e3b0a", 00:13:43.804 "strip_size_kb": 64, 00:13:43.804 "state": "online", 00:13:43.804 "raid_level": "raid0", 00:13:43.804 "superblock": true, 00:13:43.804 "num_base_bdevs": 2, 00:13:43.804 "num_base_bdevs_discovered": 2, 00:13:43.804 "num_base_bdevs_operational": 2, 00:13:43.804 "base_bdevs_list": [ 00:13:43.804 { 00:13:43.804 "name": "pt1", 00:13:43.804 "uuid": "b7bdc4c1-13ed-5962-98a8-570cb42ec868", 00:13:43.804 "is_configured": true, 00:13:43.804 "data_offset": 2048, 00:13:43.804 "data_size": 63488 00:13:43.804 }, 00:13:43.804 { 00:13:43.804 "name": "pt2", 00:13:43.804 "uuid": "f89b1f3e-f40e-5e57-8752-22b58abbf76c", 00:13:43.804 "is_configured": true, 00:13:43.804 "data_offset": 2048, 00:13:43.804 "data_size": 63488 00:13:43.804 } 00:13:43.804 ] 00:13:43.804 }' 00:13:43.804 21:10:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:43.804 21:10:06 -- common/autotest_common.sh@10 -- # set +x 00:13:44.371 21:10:06 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:44.371 21:10:06 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:13:44.629 [2024-06-07 21:10:07.272062] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.629 21:10:07 -- bdev/bdev_raid.sh@430 -- # '[' bc04eaaf-791d-469a-8c74-676c692e3b0a '!=' bc04eaaf-791d-469a-8c74-676c692e3b0a ']' 00:13:44.629 21:10:07 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:13:44.629 21:10:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:44.629 21:10:07 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:44.629 21:10:07 -- bdev/bdev_raid.sh@511 -- # killprocess 125898 00:13:44.629 21:10:07 -- common/autotest_common.sh@926 -- # '[' -z 125898 ']' 00:13:44.629 21:10:07 -- common/autotest_common.sh@930 -- # kill -0 125898 00:13:44.629 21:10:07 -- common/autotest_common.sh@931 -- # uname 00:13:44.629 21:10:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:44.629 21:10:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125898 00:13:44.887 killing process with pid 125898 00:13:44.887 21:10:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:44.887 21:10:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:44.887 21:10:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125898' 00:13:44.887 21:10:07 -- common/autotest_common.sh@945 -- # kill 125898 00:13:44.887 21:10:07 -- common/autotest_common.sh@950 -- # wait 125898 00:13:44.887 [2024-06-07 21:10:07.312341] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.887 [2024-06-07 21:10:07.312486] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.887 [2024-06-07 21:10:07.312556] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.887 [2024-06-07 21:10:07.312569] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:13:44.887 [2024-06-07 21:10:07.333170] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:45.146 ************************************ 00:13:45.146 END TEST raid_superblock_test 00:13:45.146 ************************************ 00:13:45.146 21:10:07 -- 
bdev/bdev_raid.sh@513 -- # return 0 00:13:45.146 00:13:45.146 real 0m7.892s 00:13:45.146 user 0m14.432s 00:13:45.146 sys 0m0.969s 00:13:45.146 21:10:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.146 21:10:07 -- common/autotest_common.sh@10 -- # set +x 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:13:45.146 21:10:07 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:45.146 21:10:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:45.146 21:10:07 -- common/autotest_common.sh@10 -- # set +x 00:13:45.146 ************************************ 00:13:45.146 START TEST raid_state_function_test 00:13:45.146 ************************************ 00:13:45.146 21:10:07 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@226 -- # raid_pid=126134 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126134' 00:13:45.146 Process raid pid: 126134 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126134 /var/tmp/spdk-raid.sock 00:13:45.146 21:10:07 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:45.146 21:10:07 -- common/autotest_common.sh@819 -- # '[' -z 126134 ']' 00:13:45.146 21:10:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:45.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
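Each test in this file runs against a fresh bdev_svc app on the same UNIX socket, and the waitforlisten helper, per the xtrace above, retries up to 100 times until the RPC server answers. A sketch of that launch-and-wait pattern; the rpc_get_methods probe is an assumption about one reasonable readiness check, not a quote of the helper itself:

/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!

# Poll the socket until the app responds (waitforlisten allows max_retries=100)
for _ in $(seq 1 100); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        rpc_get_methods &> /dev/null && break
    sleep 0.1
done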
00:13:45.146 21:10:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:45.146 21:10:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:45.146 21:10:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:45.146 21:10:07 -- common/autotest_common.sh@10 -- # set +x 00:13:45.146 [2024-06-07 21:10:07.707176] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:45.146 [2024-06-07 21:10:07.707400] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.407 [2024-06-07 21:10:07.877871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.407 [2024-06-07 21:10:07.969928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.407 [2024-06-07 21:10:08.026356] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.978 21:10:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:45.978 21:10:08 -- common/autotest_common.sh@852 -- # return 0 00:13:45.978 21:10:08 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:46.236 [2024-06-07 21:10:08.799881] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:46.236 [2024-06-07 21:10:08.799977] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:46.236 [2024-06-07 21:10:08.799992] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:46.236 [2024-06-07 21:10:08.800010] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:46.236 21:10:08 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:46.236 21:10:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:46.237 21:10:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:46.237 21:10:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:46.237 21:10:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:46.237 21:10:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:46.237 21:10:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:46.237 21:10:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:46.237 21:10:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:46.237 21:10:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:46.237 21:10:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.237 21:10:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.495 21:10:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:46.495 "name": "Existed_Raid", 00:13:46.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.495 "strip_size_kb": 64, 00:13:46.495 "state": "configuring", 00:13:46.495 "raid_level": "concat", 00:13:46.495 "superblock": false, 00:13:46.495 "num_base_bdevs": 2, 00:13:46.495 "num_base_bdevs_discovered": 0, 00:13:46.495 "num_base_bdevs_operational": 2, 00:13:46.495 "base_bdevs_list": [ 00:13:46.495 { 00:13:46.495 "name": "BaseBdev1", 00:13:46.495 
"uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.495 "is_configured": false, 00:13:46.495 "data_offset": 0, 00:13:46.495 "data_size": 0 00:13:46.495 }, 00:13:46.495 { 00:13:46.495 "name": "BaseBdev2", 00:13:46.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.495 "is_configured": false, 00:13:46.495 "data_offset": 0, 00:13:46.495 "data_size": 0 00:13:46.495 } 00:13:46.495 ] 00:13:46.495 }' 00:13:46.495 21:10:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:46.495 21:10:09 -- common/autotest_common.sh@10 -- # set +x 00:13:47.062 21:10:09 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:47.320 [2024-06-07 21:10:09.927972] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:47.320 [2024-06-07 21:10:09.928042] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:13:47.320 21:10:09 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:47.578 [2024-06-07 21:10:10.216040] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:47.578 [2024-06-07 21:10:10.216148] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:47.578 [2024-06-07 21:10:10.216163] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:47.578 [2024-06-07 21:10:10.216192] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:47.578 21:10:10 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:47.837 [2024-06-07 21:10:10.448171] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.837 BaseBdev1 00:13:47.837 21:10:10 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:47.837 21:10:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:47.837 21:10:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:47.837 21:10:10 -- common/autotest_common.sh@889 -- # local i 00:13:47.837 21:10:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:47.837 21:10:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:47.837 21:10:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:48.094 21:10:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:48.351 [ 00:13:48.351 { 00:13:48.351 "name": "BaseBdev1", 00:13:48.351 "aliases": [ 00:13:48.352 "7492b2a5-222d-45e0-a4d3-e067c1b3fc46" 00:13:48.352 ], 00:13:48.352 "product_name": "Malloc disk", 00:13:48.352 "block_size": 512, 00:13:48.352 "num_blocks": 65536, 00:13:48.352 "uuid": "7492b2a5-222d-45e0-a4d3-e067c1b3fc46", 00:13:48.352 "assigned_rate_limits": { 00:13:48.352 "rw_ios_per_sec": 0, 00:13:48.352 "rw_mbytes_per_sec": 0, 00:13:48.352 "r_mbytes_per_sec": 0, 00:13:48.352 "w_mbytes_per_sec": 0 00:13:48.352 }, 00:13:48.352 "claimed": true, 00:13:48.352 "claim_type": "exclusive_write", 00:13:48.352 "zoned": false, 00:13:48.352 "supported_io_types": { 00:13:48.352 "read": true, 00:13:48.352 "write": true, 00:13:48.352 "unmap": true, 00:13:48.352 
"write_zeroes": true, 00:13:48.352 "flush": true, 00:13:48.352 "reset": true, 00:13:48.352 "compare": false, 00:13:48.352 "compare_and_write": false, 00:13:48.352 "abort": true, 00:13:48.352 "nvme_admin": false, 00:13:48.352 "nvme_io": false 00:13:48.352 }, 00:13:48.352 "memory_domains": [ 00:13:48.352 { 00:13:48.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.352 "dma_device_type": 2 00:13:48.352 } 00:13:48.352 ], 00:13:48.352 "driver_specific": {} 00:13:48.352 } 00:13:48.352 ] 00:13:48.352 21:10:10 -- common/autotest_common.sh@895 -- # return 0 00:13:48.352 21:10:10 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:48.352 21:10:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:48.352 21:10:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:48.352 21:10:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:48.352 21:10:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:48.352 21:10:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:48.352 21:10:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:48.352 21:10:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:48.352 21:10:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:48.352 21:10:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:48.352 21:10:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.352 21:10:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.610 21:10:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:48.610 "name": "Existed_Raid", 00:13:48.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.610 "strip_size_kb": 64, 00:13:48.610 "state": "configuring", 00:13:48.610 "raid_level": "concat", 00:13:48.610 "superblock": false, 00:13:48.610 "num_base_bdevs": 2, 00:13:48.610 "num_base_bdevs_discovered": 1, 00:13:48.610 "num_base_bdevs_operational": 2, 00:13:48.610 "base_bdevs_list": [ 00:13:48.610 { 00:13:48.610 "name": "BaseBdev1", 00:13:48.610 "uuid": "7492b2a5-222d-45e0-a4d3-e067c1b3fc46", 00:13:48.610 "is_configured": true, 00:13:48.610 "data_offset": 0, 00:13:48.610 "data_size": 65536 00:13:48.610 }, 00:13:48.610 { 00:13:48.610 "name": "BaseBdev2", 00:13:48.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.610 "is_configured": false, 00:13:48.610 "data_offset": 0, 00:13:48.610 "data_size": 0 00:13:48.610 } 00:13:48.610 ] 00:13:48.610 }' 00:13:48.610 21:10:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:48.610 21:10:11 -- common/autotest_common.sh@10 -- # set +x 00:13:49.175 21:10:11 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:49.460 [2024-06-07 21:10:11.976601] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:49.460 [2024-06-07 21:10:11.976698] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:13:49.460 21:10:11 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:13:49.460 21:10:11 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:49.740 [2024-06-07 21:10:12.272727] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:49.740 [2024-06-07 
21:10:12.274908] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:49.740 [2024-06-07 21:10:12.274994] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:49.740 21:10:12 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:49.740 21:10:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:49.740 21:10:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:49.740 21:10:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:49.740 21:10:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:49.740 21:10:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:49.740 21:10:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:49.740 21:10:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:49.740 21:10:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:49.740 21:10:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:49.740 21:10:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:49.740 21:10:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:49.740 21:10:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.740 21:10:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.998 21:10:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:49.998 "name": "Existed_Raid", 00:13:49.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.998 "strip_size_kb": 64, 00:13:49.998 "state": "configuring", 00:13:49.999 "raid_level": "concat", 00:13:49.999 "superblock": false, 00:13:49.999 "num_base_bdevs": 2, 00:13:49.999 "num_base_bdevs_discovered": 1, 00:13:49.999 "num_base_bdevs_operational": 2, 00:13:49.999 "base_bdevs_list": [ 00:13:49.999 { 00:13:49.999 "name": "BaseBdev1", 00:13:49.999 "uuid": "7492b2a5-222d-45e0-a4d3-e067c1b3fc46", 00:13:49.999 "is_configured": true, 00:13:49.999 "data_offset": 0, 00:13:49.999 "data_size": 65536 00:13:49.999 }, 00:13:49.999 { 00:13:49.999 "name": "BaseBdev2", 00:13:49.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.999 "is_configured": false, 00:13:49.999 "data_offset": 0, 00:13:49.999 "data_size": 0 00:13:49.999 } 00:13:49.999 ] 00:13:49.999 }' 00:13:49.999 21:10:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:49.999 21:10:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.565 21:10:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:50.825 [2024-06-07 21:10:13.397053] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:50.825 [2024-06-07 21:10:13.397143] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:13:50.825 [2024-06-07 21:10:13.397171] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:50.825 [2024-06-07 21:10:13.397363] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:13:50.825 [2024-06-07 21:10:13.397908] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:13:50.825 [2024-06-07 21:10:13.397926] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:13:50.825 [2024-06-07 21:10:13.398235] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:50.825 BaseBdev2 00:13:50.825 21:10:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:50.825 21:10:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:13:50.825 21:10:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:50.825 21:10:13 -- common/autotest_common.sh@889 -- # local i 00:13:50.825 21:10:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:50.825 21:10:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:50.825 21:10:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:51.082 21:10:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:51.339 [ 00:13:51.339 { 00:13:51.339 "name": "BaseBdev2", 00:13:51.339 "aliases": [ 00:13:51.339 "03be4227-87e4-4077-93b2-0ba3ccdbb706" 00:13:51.339 ], 00:13:51.339 "product_name": "Malloc disk", 00:13:51.339 "block_size": 512, 00:13:51.339 "num_blocks": 65536, 00:13:51.339 "uuid": "03be4227-87e4-4077-93b2-0ba3ccdbb706", 00:13:51.339 "assigned_rate_limits": { 00:13:51.339 "rw_ios_per_sec": 0, 00:13:51.339 "rw_mbytes_per_sec": 0, 00:13:51.339 "r_mbytes_per_sec": 0, 00:13:51.339 "w_mbytes_per_sec": 0 00:13:51.339 }, 00:13:51.339 "claimed": true, 00:13:51.339 "claim_type": "exclusive_write", 00:13:51.339 "zoned": false, 00:13:51.339 "supported_io_types": { 00:13:51.339 "read": true, 00:13:51.339 "write": true, 00:13:51.339 "unmap": true, 00:13:51.339 "write_zeroes": true, 00:13:51.339 "flush": true, 00:13:51.339 "reset": true, 00:13:51.339 "compare": false, 00:13:51.339 "compare_and_write": false, 00:13:51.339 "abort": true, 00:13:51.339 "nvme_admin": false, 00:13:51.339 "nvme_io": false 00:13:51.339 }, 00:13:51.339 "memory_domains": [ 00:13:51.339 { 00:13:51.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.339 "dma_device_type": 2 00:13:51.339 } 00:13:51.339 ], 00:13:51.339 "driver_specific": {} 00:13:51.339 } 00:13:51.339 ] 00:13:51.339 21:10:13 -- common/autotest_common.sh@895 -- # return 0 00:13:51.339 21:10:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:51.339 21:10:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:51.339 21:10:13 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:13:51.339 21:10:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:51.339 21:10:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:51.339 21:10:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:51.339 21:10:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:51.339 21:10:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:51.339 21:10:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:51.339 21:10:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:51.339 21:10:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:51.339 21:10:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:51.339 21:10:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.339 21:10:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.597 21:10:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:51.597 "name": "Existed_Raid", 00:13:51.597 "uuid": "197750ed-bf0a-4cb5-a800-ed8be7737297", 00:13:51.597 "strip_size_kb": 64, 00:13:51.597 
"state": "online", 00:13:51.597 "raid_level": "concat", 00:13:51.597 "superblock": false, 00:13:51.597 "num_base_bdevs": 2, 00:13:51.597 "num_base_bdevs_discovered": 2, 00:13:51.597 "num_base_bdevs_operational": 2, 00:13:51.597 "base_bdevs_list": [ 00:13:51.597 { 00:13:51.597 "name": "BaseBdev1", 00:13:51.597 "uuid": "7492b2a5-222d-45e0-a4d3-e067c1b3fc46", 00:13:51.597 "is_configured": true, 00:13:51.597 "data_offset": 0, 00:13:51.597 "data_size": 65536 00:13:51.597 }, 00:13:51.597 { 00:13:51.597 "name": "BaseBdev2", 00:13:51.597 "uuid": "03be4227-87e4-4077-93b2-0ba3ccdbb706", 00:13:51.597 "is_configured": true, 00:13:51.597 "data_offset": 0, 00:13:51.597 "data_size": 65536 00:13:51.597 } 00:13:51.597 ] 00:13:51.597 }' 00:13:51.597 21:10:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:51.597 21:10:14 -- common/autotest_common.sh@10 -- # set +x 00:13:52.164 21:10:14 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:52.423 [2024-06-07 21:10:15.021421] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:52.423 [2024-06-07 21:10:15.021461] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:52.423 [2024-06-07 21:10:15.021609] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:52.423 21:10:15 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:52.423 21:10:15 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:13:52.423 21:10:15 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:52.423 21:10:15 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:52.423 21:10:15 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:52.423 21:10:15 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:13:52.423 21:10:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:52.423 21:10:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:52.423 21:10:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:52.423 21:10:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:52.423 21:10:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:52.423 21:10:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:52.423 21:10:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:52.423 21:10:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:52.423 21:10:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:52.423 21:10:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.423 21:10:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.681 21:10:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:52.681 "name": "Existed_Raid", 00:13:52.681 "uuid": "197750ed-bf0a-4cb5-a800-ed8be7737297", 00:13:52.681 "strip_size_kb": 64, 00:13:52.681 "state": "offline", 00:13:52.681 "raid_level": "concat", 00:13:52.681 "superblock": false, 00:13:52.681 "num_base_bdevs": 2, 00:13:52.681 "num_base_bdevs_discovered": 1, 00:13:52.681 "num_base_bdevs_operational": 1, 00:13:52.681 "base_bdevs_list": [ 00:13:52.681 { 00:13:52.681 "name": null, 00:13:52.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.681 "is_configured": false, 00:13:52.681 "data_offset": 0, 00:13:52.681 "data_size": 65536 00:13:52.681 }, 00:13:52.681 { 00:13:52.681 "name": "BaseBdev2", 00:13:52.681 "uuid": "03be4227-87e4-4077-93b2-0ba3ccdbb706", 
00:13:52.681 "is_configured": true, 00:13:52.681 "data_offset": 0, 00:13:52.681 "data_size": 65536 00:13:52.681 } 00:13:52.681 ] 00:13:52.681 }' 00:13:52.681 21:10:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:52.681 21:10:15 -- common/autotest_common.sh@10 -- # set +x 00:13:53.616 21:10:15 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:53.616 21:10:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:53.616 21:10:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.616 21:10:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:53.616 21:10:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:53.616 21:10:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:53.616 21:10:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:53.874 [2024-06-07 21:10:16.469543] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:53.875 [2024-06-07 21:10:16.469646] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:13:53.875 21:10:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:53.875 21:10:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:53.875 21:10:16 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.875 21:10:16 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:54.132 21:10:16 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:13:54.132 21:10:16 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:54.132 21:10:16 -- bdev/bdev_raid.sh@287 -- # killprocess 126134 00:13:54.132 21:10:16 -- common/autotest_common.sh@926 -- # '[' -z 126134 ']' 00:13:54.132 21:10:16 -- common/autotest_common.sh@930 -- # kill -0 126134 00:13:54.132 21:10:16 -- common/autotest_common.sh@931 -- # uname 00:13:54.132 21:10:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:54.132 21:10:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126134 00:13:54.132 killing process with pid 126134 00:13:54.132 21:10:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:54.132 21:10:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:54.132 21:10:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126134' 00:13:54.132 21:10:16 -- common/autotest_common.sh@945 -- # kill 126134 00:13:54.132 21:10:16 -- common/autotest_common.sh@950 -- # wait 126134 00:13:54.132 [2024-06-07 21:10:16.743684] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:54.132 [2024-06-07 21:10:16.743783] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:54.390 ************************************ 00:13:54.390 END TEST raid_state_function_test 00:13:54.390 ************************************ 00:13:54.390 21:10:16 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:54.390 00:13:54.390 real 0m9.339s 00:13:54.390 user 0m17.184s 00:13:54.390 sys 0m1.058s 00:13:54.390 21:10:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:54.390 21:10:16 -- common/autotest_common.sh@10 -- # set +x 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:13:54.390 21:10:17 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:54.390 21:10:17 -- common/autotest_common.sh@1083 
-- # xtrace_disable 00:13:54.390 21:10:17 -- common/autotest_common.sh@10 -- # set +x 00:13:54.390 ************************************ 00:13:54.390 START TEST raid_state_function_test_sb 00:13:54.390 ************************************ 00:13:54.390 21:10:17 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@226 -- # raid_pid=126470 00:13:54.390 Process raid pid: 126470 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126470' 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:54.390 21:10:17 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126470 /var/tmp/spdk-raid.sock 00:13:54.390 21:10:17 -- common/autotest_common.sh@819 -- # '[' -z 126470 ']' 00:13:54.390 21:10:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:54.390 21:10:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:54.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:54.390 21:10:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:54.390 21:10:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:54.390 21:10:17 -- common/autotest_common.sh@10 -- # set +x 00:13:54.648 [2024-06-07 21:10:17.101512] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
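raid_state_function_test_sb replays the same configuring/online/offline walk as the plain variant above, with superblock=true turning superblock_create_arg into -s, so every bdev_raid_create in this run writes metadata to its members (visible below as '-z 64 -s'). One side effect worth noting: with a superblock present, member JSON in the earlier raid0 test reported data_offset 2048 and data_size 63488 rather than the 0 and 65536 seen without it, i.e. the first 2048 blocks (1 MiB at 512-byte blocks) appear to be reserved as the metadata region. A sketch of the flag difference, same assumed rpc() shorthand:

# Plain variant (previous test): no on-disk metadata
rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

# _sb variant (this test): -s reserves the head of each member for the superblock
rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid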
00:13:54.648 [2024-06-07 21:10:17.101743] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.648 [2024-06-07 21:10:17.267610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.906 [2024-06-07 21:10:17.347516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.906 [2024-06-07 21:10:17.406185] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.529 21:10:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:55.529 21:10:18 -- common/autotest_common.sh@852 -- # return 0 00:13:55.529 21:10:18 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:55.787 [2024-06-07 21:10:18.260549] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:55.787 [2024-06-07 21:10:18.260629] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:55.787 [2024-06-07 21:10:18.260659] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:55.787 [2024-06-07 21:10:18.260677] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:55.787 21:10:18 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:55.787 21:10:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:55.787 21:10:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:55.787 21:10:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:55.787 21:10:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:55.787 21:10:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:55.787 21:10:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:55.787 21:10:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:55.787 21:10:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:55.787 21:10:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:55.787 21:10:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.787 21:10:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.046 21:10:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:56.046 "name": "Existed_Raid", 00:13:56.046 "uuid": "f97ab9a1-acd8-4fda-bd8f-e0b7eca86da5", 00:13:56.046 "strip_size_kb": 64, 00:13:56.046 "state": "configuring", 00:13:56.046 "raid_level": "concat", 00:13:56.046 "superblock": true, 00:13:56.046 "num_base_bdevs": 2, 00:13:56.046 "num_base_bdevs_discovered": 0, 00:13:56.046 "num_base_bdevs_operational": 2, 00:13:56.046 "base_bdevs_list": [ 00:13:56.046 { 00:13:56.046 "name": "BaseBdev1", 00:13:56.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.046 "is_configured": false, 00:13:56.046 "data_offset": 0, 00:13:56.046 "data_size": 0 00:13:56.046 }, 00:13:56.046 { 00:13:56.046 "name": "BaseBdev2", 00:13:56.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.046 "is_configured": false, 00:13:56.046 "data_offset": 0, 00:13:56.046 "data_size": 0 00:13:56.046 } 00:13:56.046 ] 00:13:56.046 }' 00:13:56.046 21:10:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:56.046 21:10:18 -- 
common/autotest_common.sh@10 -- # set +x 00:13:56.612 21:10:19 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:56.869 [2024-06-07 21:10:19.496669] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:56.869 [2024-06-07 21:10:19.496734] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:13:56.869 21:10:19 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:57.127 [2024-06-07 21:10:19.692780] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:57.127 [2024-06-07 21:10:19.692926] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:57.127 [2024-06-07 21:10:19.692940] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:57.127 [2024-06-07 21:10:19.692965] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:57.127 21:10:19 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:57.385 [2024-06-07 21:10:19.900399] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:57.385 BaseBdev1 00:13:57.385 21:10:19 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:57.385 21:10:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:57.385 21:10:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:57.385 21:10:19 -- common/autotest_common.sh@889 -- # local i 00:13:57.385 21:10:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:57.385 21:10:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:57.385 21:10:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:57.643 21:10:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:57.900 [ 00:13:57.901 { 00:13:57.901 "name": "BaseBdev1", 00:13:57.901 "aliases": [ 00:13:57.901 "4c85bde5-37d8-41a5-86bb-84f990e66ba6" 00:13:57.901 ], 00:13:57.901 "product_name": "Malloc disk", 00:13:57.901 "block_size": 512, 00:13:57.901 "num_blocks": 65536, 00:13:57.901 "uuid": "4c85bde5-37d8-41a5-86bb-84f990e66ba6", 00:13:57.901 "assigned_rate_limits": { 00:13:57.901 "rw_ios_per_sec": 0, 00:13:57.901 "rw_mbytes_per_sec": 0, 00:13:57.901 "r_mbytes_per_sec": 0, 00:13:57.901 "w_mbytes_per_sec": 0 00:13:57.901 }, 00:13:57.901 "claimed": true, 00:13:57.901 "claim_type": "exclusive_write", 00:13:57.901 "zoned": false, 00:13:57.901 "supported_io_types": { 00:13:57.901 "read": true, 00:13:57.901 "write": true, 00:13:57.901 "unmap": true, 00:13:57.901 "write_zeroes": true, 00:13:57.901 "flush": true, 00:13:57.901 "reset": true, 00:13:57.901 "compare": false, 00:13:57.901 "compare_and_write": false, 00:13:57.901 "abort": true, 00:13:57.901 "nvme_admin": false, 00:13:57.901 "nvme_io": false 00:13:57.901 }, 00:13:57.901 "memory_domains": [ 00:13:57.901 { 00:13:57.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.901 "dma_device_type": 2 00:13:57.901 } 00:13:57.901 ], 00:13:57.901 "driver_specific": {} 00:13:57.901 } 00:13:57.901 ] 00:13:57.901 
21:10:20 -- common/autotest_common.sh@895 -- # return 0 00:13:57.901 21:10:20 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:57.901 21:10:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:57.901 21:10:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:57.901 21:10:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:57.901 21:10:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:57.901 21:10:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:57.901 21:10:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:57.901 21:10:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:57.901 21:10:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:57.901 21:10:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:57.901 21:10:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.901 21:10:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.901 21:10:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:57.901 "name": "Existed_Raid", 00:13:57.901 "uuid": "25ff8d92-c389-40bf-855d-82d5017fece2", 00:13:57.901 "strip_size_kb": 64, 00:13:57.901 "state": "configuring", 00:13:57.901 "raid_level": "concat", 00:13:57.901 "superblock": true, 00:13:57.901 "num_base_bdevs": 2, 00:13:57.901 "num_base_bdevs_discovered": 1, 00:13:57.901 "num_base_bdevs_operational": 2, 00:13:57.901 "base_bdevs_list": [ 00:13:57.901 { 00:13:57.901 "name": "BaseBdev1", 00:13:57.901 "uuid": "4c85bde5-37d8-41a5-86bb-84f990e66ba6", 00:13:57.901 "is_configured": true, 00:13:57.901 "data_offset": 2048, 00:13:57.901 "data_size": 63488 00:13:57.901 }, 00:13:57.901 { 00:13:57.901 "name": "BaseBdev2", 00:13:57.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.901 "is_configured": false, 00:13:57.901 "data_offset": 0, 00:13:57.901 "data_size": 0 00:13:57.901 } 00:13:57.901 ] 00:13:57.901 }' 00:13:57.901 21:10:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:57.901 21:10:20 -- common/autotest_common.sh@10 -- # set +x 00:13:58.834 21:10:21 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:58.834 [2024-06-07 21:10:21.408786] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:58.834 [2024-06-07 21:10:21.408886] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:13:58.834 21:10:21 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:13:58.834 21:10:21 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:59.092 21:10:21 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:59.349 BaseBdev1 00:13:59.350 21:10:21 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:13:59.350 21:10:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:59.350 21:10:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:59.350 21:10:21 -- common/autotest_common.sh@889 -- # local i 00:13:59.350 21:10:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:59.350 21:10:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:59.350 21:10:21 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:59.607 21:10:22 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:59.864 [ 00:13:59.864 { 00:13:59.864 "name": "BaseBdev1", 00:13:59.864 "aliases": [ 00:13:59.864 "7d499f03-2083-46aa-91af-038dc470aaf8" 00:13:59.864 ], 00:13:59.864 "product_name": "Malloc disk", 00:13:59.864 "block_size": 512, 00:13:59.864 "num_blocks": 65536, 00:13:59.864 "uuid": "7d499f03-2083-46aa-91af-038dc470aaf8", 00:13:59.864 "assigned_rate_limits": { 00:13:59.864 "rw_ios_per_sec": 0, 00:13:59.864 "rw_mbytes_per_sec": 0, 00:13:59.864 "r_mbytes_per_sec": 0, 00:13:59.864 "w_mbytes_per_sec": 0 00:13:59.864 }, 00:13:59.864 "claimed": false, 00:13:59.864 "zoned": false, 00:13:59.864 "supported_io_types": { 00:13:59.864 "read": true, 00:13:59.865 "write": true, 00:13:59.865 "unmap": true, 00:13:59.865 "write_zeroes": true, 00:13:59.865 "flush": true, 00:13:59.865 "reset": true, 00:13:59.865 "compare": false, 00:13:59.865 "compare_and_write": false, 00:13:59.865 "abort": true, 00:13:59.865 "nvme_admin": false, 00:13:59.865 "nvme_io": false 00:13:59.865 }, 00:13:59.865 "memory_domains": [ 00:13:59.865 { 00:13:59.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.865 "dma_device_type": 2 00:13:59.865 } 00:13:59.865 ], 00:13:59.865 "driver_specific": {} 00:13:59.865 } 00:13:59.865 ] 00:13:59.865 21:10:22 -- common/autotest_common.sh@895 -- # return 0 00:13:59.865 21:10:22 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:59.865 [2024-06-07 21:10:22.494712] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.865 [2024-06-07 21:10:22.496630] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:59.865 [2024-06-07 21:10:22.496707] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:59.865 21:10:22 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:59.865 21:10:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:59.865 21:10:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:59.865 21:10:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:59.865 21:10:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:59.865 21:10:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:59.865 21:10:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:59.865 21:10:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:59.865 21:10:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:59.865 21:10:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:59.865 21:10:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:59.865 21:10:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:59.865 21:10:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.865 21:10:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.122 21:10:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:00.122 "name": "Existed_Raid", 00:14:00.122 "uuid": "3f9dd86f-d419-4e8b-927b-0c1992c053fd", 00:14:00.122 "strip_size_kb": 64, 00:14:00.122 "state": 
"configuring", 00:14:00.122 "raid_level": "concat", 00:14:00.122 "superblock": true, 00:14:00.122 "num_base_bdevs": 2, 00:14:00.122 "num_base_bdevs_discovered": 1, 00:14:00.122 "num_base_bdevs_operational": 2, 00:14:00.122 "base_bdevs_list": [ 00:14:00.122 { 00:14:00.122 "name": "BaseBdev1", 00:14:00.122 "uuid": "7d499f03-2083-46aa-91af-038dc470aaf8", 00:14:00.122 "is_configured": true, 00:14:00.122 "data_offset": 2048, 00:14:00.122 "data_size": 63488 00:14:00.122 }, 00:14:00.122 { 00:14:00.122 "name": "BaseBdev2", 00:14:00.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.122 "is_configured": false, 00:14:00.122 "data_offset": 0, 00:14:00.122 "data_size": 0 00:14:00.122 } 00:14:00.122 ] 00:14:00.122 }' 00:14:00.122 21:10:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:00.122 21:10:22 -- common/autotest_common.sh@10 -- # set +x 00:14:00.693 21:10:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:00.957 [2024-06-07 21:10:23.556924] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:00.957 [2024-06-07 21:10:23.557229] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:14:00.957 [2024-06-07 21:10:23.557248] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:00.957 BaseBdev2 00:14:00.957 [2024-06-07 21:10:23.557474] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:00.957 [2024-06-07 21:10:23.558002] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:14:00.957 [2024-06-07 21:10:23.558031] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:14:00.957 [2024-06-07 21:10:23.558275] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.957 21:10:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:00.957 21:10:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:00.957 21:10:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:00.957 21:10:23 -- common/autotest_common.sh@889 -- # local i 00:14:00.957 21:10:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:00.957 21:10:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:00.957 21:10:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:01.214 21:10:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:01.472 [ 00:14:01.472 { 00:14:01.472 "name": "BaseBdev2", 00:14:01.472 "aliases": [ 00:14:01.472 "3b527512-8ba3-4a39-8acd-7ef8ef031eb9" 00:14:01.472 ], 00:14:01.472 "product_name": "Malloc disk", 00:14:01.472 "block_size": 512, 00:14:01.472 "num_blocks": 65536, 00:14:01.472 "uuid": "3b527512-8ba3-4a39-8acd-7ef8ef031eb9", 00:14:01.472 "assigned_rate_limits": { 00:14:01.472 "rw_ios_per_sec": 0, 00:14:01.472 "rw_mbytes_per_sec": 0, 00:14:01.472 "r_mbytes_per_sec": 0, 00:14:01.472 "w_mbytes_per_sec": 0 00:14:01.472 }, 00:14:01.472 "claimed": true, 00:14:01.472 "claim_type": "exclusive_write", 00:14:01.472 "zoned": false, 00:14:01.472 "supported_io_types": { 00:14:01.472 "read": true, 00:14:01.472 "write": true, 00:14:01.472 "unmap": true, 00:14:01.472 "write_zeroes": true, 00:14:01.472 "flush": true, 00:14:01.472 
"reset": true, 00:14:01.472 "compare": false, 00:14:01.472 "compare_and_write": false, 00:14:01.472 "abort": true, 00:14:01.472 "nvme_admin": false, 00:14:01.472 "nvme_io": false 00:14:01.472 }, 00:14:01.472 "memory_domains": [ 00:14:01.472 { 00:14:01.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.472 "dma_device_type": 2 00:14:01.472 } 00:14:01.472 ], 00:14:01.472 "driver_specific": {} 00:14:01.472 } 00:14:01.472 ] 00:14:01.472 21:10:23 -- common/autotest_common.sh@895 -- # return 0 00:14:01.472 21:10:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:01.472 21:10:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:01.472 21:10:23 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:01.472 21:10:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:01.472 21:10:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:01.472 21:10:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:01.472 21:10:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:01.472 21:10:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:01.472 21:10:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:01.472 21:10:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:01.472 21:10:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:01.472 21:10:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:01.472 21:10:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.472 21:10:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.730 21:10:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:01.730 "name": "Existed_Raid", 00:14:01.730 "uuid": "3f9dd86f-d419-4e8b-927b-0c1992c053fd", 00:14:01.730 "strip_size_kb": 64, 00:14:01.730 "state": "online", 00:14:01.730 "raid_level": "concat", 00:14:01.730 "superblock": true, 00:14:01.730 "num_base_bdevs": 2, 00:14:01.730 "num_base_bdevs_discovered": 2, 00:14:01.730 "num_base_bdevs_operational": 2, 00:14:01.730 "base_bdevs_list": [ 00:14:01.730 { 00:14:01.730 "name": "BaseBdev1", 00:14:01.730 "uuid": "7d499f03-2083-46aa-91af-038dc470aaf8", 00:14:01.730 "is_configured": true, 00:14:01.730 "data_offset": 2048, 00:14:01.730 "data_size": 63488 00:14:01.730 }, 00:14:01.730 { 00:14:01.730 "name": "BaseBdev2", 00:14:01.730 "uuid": "3b527512-8ba3-4a39-8acd-7ef8ef031eb9", 00:14:01.730 "is_configured": true, 00:14:01.730 "data_offset": 2048, 00:14:01.730 "data_size": 63488 00:14:01.730 } 00:14:01.730 ] 00:14:01.730 }' 00:14:01.730 21:10:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:01.730 21:10:24 -- common/autotest_common.sh@10 -- # set +x 00:14:02.296 21:10:24 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:02.554 [2024-06-07 21:10:25.121447] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:02.554 [2024-06-07 21:10:25.121492] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:02.554 [2024-06-07 21:10:25.121598] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.554 21:10:25 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:02.554 21:10:25 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:02.554 21:10:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:02.554 21:10:25 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:02.554 
21:10:25 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:02.554 21:10:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:02.554 21:10:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:02.554 21:10:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:02.554 21:10:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:02.554 21:10:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:02.554 21:10:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:02.554 21:10:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:02.554 21:10:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:02.554 21:10:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:02.554 21:10:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:02.554 21:10:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.554 21:10:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.812 21:10:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:02.812 "name": "Existed_Raid", 00:14:02.812 "uuid": "3f9dd86f-d419-4e8b-927b-0c1992c053fd", 00:14:02.812 "strip_size_kb": 64, 00:14:02.812 "state": "offline", 00:14:02.812 "raid_level": "concat", 00:14:02.812 "superblock": true, 00:14:02.812 "num_base_bdevs": 2, 00:14:02.812 "num_base_bdevs_discovered": 1, 00:14:02.812 "num_base_bdevs_operational": 1, 00:14:02.812 "base_bdevs_list": [ 00:14:02.812 { 00:14:02.812 "name": null, 00:14:02.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.812 "is_configured": false, 00:14:02.812 "data_offset": 2048, 00:14:02.812 "data_size": 63488 00:14:02.812 }, 00:14:02.812 { 00:14:02.812 "name": "BaseBdev2", 00:14:02.812 "uuid": "3b527512-8ba3-4a39-8acd-7ef8ef031eb9", 00:14:02.812 "is_configured": true, 00:14:02.812 "data_offset": 2048, 00:14:02.812 "data_size": 63488 00:14:02.812 } 00:14:02.812 ] 00:14:02.812 }' 00:14:02.812 21:10:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:02.812 21:10:25 -- common/autotest_common.sh@10 -- # set +x 00:14:03.747 21:10:26 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:03.747 21:10:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:03.747 21:10:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.747 21:10:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:03.747 21:10:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:03.747 21:10:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:03.747 21:10:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:04.005 [2024-06-07 21:10:26.555681] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:04.005 [2024-06-07 21:10:26.555797] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:14:04.005 21:10:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:04.005 21:10:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:04.005 21:10:26 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.005 21:10:26 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:04.264 21:10:26 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:14:04.264 21:10:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:04.264 21:10:26 -- bdev/bdev_raid.sh@287 -- # killprocess 126470 00:14:04.264 21:10:26 -- common/autotest_common.sh@926 -- # '[' -z 126470 ']' 00:14:04.264 21:10:26 -- common/autotest_common.sh@930 -- # kill -0 126470 00:14:04.264 21:10:26 -- common/autotest_common.sh@931 -- # uname 00:14:04.264 21:10:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:04.264 21:10:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126470 00:14:04.264 killing process with pid 126470 00:14:04.264 21:10:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:04.264 21:10:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:04.264 21:10:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126470' 00:14:04.264 21:10:26 -- common/autotest_common.sh@945 -- # kill 126470 00:14:04.264 21:10:26 -- common/autotest_common.sh@950 -- # wait 126470 00:14:04.264 [2024-06-07 21:10:26.860323] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:04.264 [2024-06-07 21:10:26.860443] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:04.523 ************************************ 00:14:04.523 END TEST raid_state_function_test_sb 00:14:04.523 ************************************ 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:04.523 00:14:04.523 real 0m10.068s 00:14:04.523 user 0m18.435s 00:14:04.523 sys 0m1.219s 00:14:04.523 21:10:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:04.523 21:10:27 -- common/autotest_common.sh@10 -- # set +x 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:14:04.523 21:10:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:04.523 21:10:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:04.523 21:10:27 -- common/autotest_common.sh@10 -- # set +x 00:14:04.523 ************************************ 00:14:04.523 START TEST raid_superblock_test 00:14:04.523 ************************************ 00:14:04.523 21:10:27 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@357 -- # raid_pid=126810 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@358 -- # waitforlisten 126810 
/var/tmp/spdk-raid.sock 00:14:04.523 21:10:27 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:04.523 21:10:27 -- common/autotest_common.sh@819 -- # '[' -z 126810 ']' 00:14:04.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:04.523 21:10:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:04.523 21:10:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:04.523 21:10:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:04.523 21:10:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:04.523 21:10:27 -- common/autotest_common.sh@10 -- # set +x 00:14:04.782 [2024-06-07 21:10:27.209849] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:04.782 [2024-06-07 21:10:27.210665] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126810 ] 00:14:04.782 [2024-06-07 21:10:27.377846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.051 [2024-06-07 21:10:27.462341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.051 [2024-06-07 21:10:27.516115] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.617 21:10:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:05.617 21:10:28 -- common/autotest_common.sh@852 -- # return 0 00:14:05.617 21:10:28 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:05.617 21:10:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:05.617 21:10:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:05.617 21:10:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:05.617 21:10:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:05.617 21:10:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:05.617 21:10:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:05.617 21:10:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:05.617 21:10:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:05.876 malloc1 00:14:05.876 21:10:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:05.876 [2024-06-07 21:10:28.546655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:05.876 [2024-06-07 21:10:28.546773] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.876 [2024-06-07 21:10:28.546817] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:14:05.876 [2024-06-07 21:10:28.546870] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.876 [2024-06-07 21:10:28.549955] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.876 [2024-06-07 21:10:28.550035] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:06.135 pt1 00:14:06.135 21:10:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
00:14:06.135 21:10:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:06.135 21:10:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:06.135 21:10:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:06.135 21:10:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:06.135 21:10:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:06.135 21:10:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:06.135 21:10:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:06.135 21:10:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:06.393 malloc2 00:14:06.394 21:10:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:06.394 [2024-06-07 21:10:29.041788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:06.394 [2024-06-07 21:10:29.041917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.394 [2024-06-07 21:10:29.041961] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:14:06.394 [2024-06-07 21:10:29.042018] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.394 [2024-06-07 21:10:29.044479] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.394 [2024-06-07 21:10:29.044528] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:06.394 pt2 00:14:06.394 21:10:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:06.394 21:10:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:06.394 21:10:29 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:14:06.652 [2024-06-07 21:10:29.250034] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:06.652 [2024-06-07 21:10:29.252562] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:06.652 [2024-06-07 21:10:29.252882] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:14:06.652 [2024-06-07 21:10:29.252900] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:06.652 [2024-06-07 21:10:29.253073] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:06.652 [2024-06-07 21:10:29.253579] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:14:06.652 [2024-06-07 21:10:29.253615] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:14:06.652 [2024-06-07 21:10:29.253870] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.652 21:10:29 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:06.652 21:10:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:06.652 21:10:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:06.652 21:10:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:06.652 21:10:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:06.652 21:10:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:14:06.652 21:10:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:06.652 21:10:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:06.652 21:10:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:06.652 21:10:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:06.652 21:10:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.652 21:10:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.910 21:10:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:06.910 "name": "raid_bdev1", 00:14:06.910 "uuid": "e9ab4724-b6fa-4f55-8406-ac08854e013c", 00:14:06.910 "strip_size_kb": 64, 00:14:06.910 "state": "online", 00:14:06.910 "raid_level": "concat", 00:14:06.910 "superblock": true, 00:14:06.910 "num_base_bdevs": 2, 00:14:06.910 "num_base_bdevs_discovered": 2, 00:14:06.910 "num_base_bdevs_operational": 2, 00:14:06.910 "base_bdevs_list": [ 00:14:06.910 { 00:14:06.910 "name": "pt1", 00:14:06.910 "uuid": "f2e961bb-1499-5dd5-b387-d1fad9728d45", 00:14:06.910 "is_configured": true, 00:14:06.910 "data_offset": 2048, 00:14:06.910 "data_size": 63488 00:14:06.910 }, 00:14:06.910 { 00:14:06.911 "name": "pt2", 00:14:06.911 "uuid": "5d214af8-ca12-5a02-95e2-34a1e08fe51b", 00:14:06.911 "is_configured": true, 00:14:06.911 "data_offset": 2048, 00:14:06.911 "data_size": 63488 00:14:06.911 } 00:14:06.911 ] 00:14:06.911 }' 00:14:06.911 21:10:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:06.911 21:10:29 -- common/autotest_common.sh@10 -- # set +x 00:14:07.844 21:10:30 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:07.844 21:10:30 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:07.844 [2024-06-07 21:10:30.442448] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.844 21:10:30 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=e9ab4724-b6fa-4f55-8406-ac08854e013c 00:14:07.844 21:10:30 -- bdev/bdev_raid.sh@380 -- # '[' -z e9ab4724-b6fa-4f55-8406-ac08854e013c ']' 00:14:07.844 21:10:30 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:08.102 [2024-06-07 21:10:30.698219] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:08.102 [2024-06-07 21:10:30.698257] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:08.102 [2024-06-07 21:10:30.698410] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.102 [2024-06-07 21:10:30.698480] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.102 [2024-06-07 21:10:30.698493] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:14:08.102 21:10:30 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.102 21:10:30 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:08.360 21:10:30 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:08.360 21:10:30 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:08.360 21:10:30 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:08.360 21:10:30 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:14:08.618 21:10:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:08.618 21:10:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:08.877 21:10:31 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:08.877 21:10:31 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:09.137 21:10:31 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:09.137 21:10:31 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:09.137 21:10:31 -- common/autotest_common.sh@640 -- # local es=0 00:14:09.137 21:10:31 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:09.137 21:10:31 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:09.137 21:10:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:09.137 21:10:31 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:09.137 21:10:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:09.137 21:10:31 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:09.137 21:10:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:09.137 21:10:31 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:09.137 21:10:31 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:09.137 21:10:31 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:09.137 [2024-06-07 21:10:31.806493] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:09.137 [2024-06-07 21:10:31.808573] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:09.137 [2024-06-07 21:10:31.808664] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:09.137 [2024-06-07 21:10:31.808790] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:09.137 [2024-06-07 21:10:31.808857] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:09.137 [2024-06-07 21:10:31.808877] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:14:09.137 request: 00:14:09.137 { 00:14:09.137 "name": "raid_bdev1", 00:14:09.137 "raid_level": "concat", 00:14:09.137 "base_bdevs": [ 00:14:09.137 "malloc1", 00:14:09.137 "malloc2" 00:14:09.137 ], 00:14:09.137 "superblock": false, 00:14:09.137 "strip_size_kb": 64, 00:14:09.137 "method": "bdev_raid_create", 00:14:09.137 "req_id": 1 00:14:09.137 } 00:14:09.137 Got JSON-RPC error response 00:14:09.137 response: 00:14:09.137 { 00:14:09.137 "code": -17, 00:14:09.137 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:09.137 } 00:14:09.395 21:10:31 -- common/autotest_common.sh@643 -- # es=1 00:14:09.395 21:10:31 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:09.395 21:10:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:09.395 21:10:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:09.395 21:10:31 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.395 21:10:31 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:09.395 21:10:32 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:09.395 21:10:32 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:09.395 21:10:32 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:09.654 [2024-06-07 21:10:32.250509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:09.654 [2024-06-07 21:10:32.250683] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.654 [2024-06-07 21:10:32.250726] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:09.654 [2024-06-07 21:10:32.250755] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.654 [2024-06-07 21:10:32.253041] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.654 [2024-06-07 21:10:32.253122] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:09.654 [2024-06-07 21:10:32.253207] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:09.654 [2024-06-07 21:10:32.253280] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:09.654 pt1 00:14:09.654 21:10:32 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:14:09.654 21:10:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:09.654 21:10:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:09.654 21:10:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:09.654 21:10:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:09.654 21:10:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:09.654 21:10:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:09.654 21:10:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:09.654 21:10:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:09.654 21:10:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:09.654 21:10:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.654 21:10:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.912 21:10:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:09.912 "name": "raid_bdev1", 00:14:09.912 "uuid": "e9ab4724-b6fa-4f55-8406-ac08854e013c", 00:14:09.912 "strip_size_kb": 64, 00:14:09.912 "state": "configuring", 00:14:09.912 "raid_level": "concat", 00:14:09.912 "superblock": true, 00:14:09.912 "num_base_bdevs": 2, 00:14:09.912 "num_base_bdevs_discovered": 1, 00:14:09.912 "num_base_bdevs_operational": 2, 00:14:09.912 "base_bdevs_list": [ 00:14:09.912 { 00:14:09.912 "name": "pt1", 00:14:09.912 "uuid": "f2e961bb-1499-5dd5-b387-d1fad9728d45", 00:14:09.912 "is_configured": true, 00:14:09.912 "data_offset": 2048, 00:14:09.912 "data_size": 63488 00:14:09.912 }, 00:14:09.912 { 00:14:09.912 "name": null, 00:14:09.912 "uuid": 
"5d214af8-ca12-5a02-95e2-34a1e08fe51b", 00:14:09.912 "is_configured": false, 00:14:09.912 "data_offset": 2048, 00:14:09.912 "data_size": 63488 00:14:09.912 } 00:14:09.912 ] 00:14:09.912 }' 00:14:09.912 21:10:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:09.912 21:10:32 -- common/autotest_common.sh@10 -- # set +x 00:14:10.479 21:10:33 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:10.479 21:10:33 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:10.479 21:10:33 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:10.479 21:10:33 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:10.736 [2024-06-07 21:10:33.394848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:10.736 [2024-06-07 21:10:33.395015] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.736 [2024-06-07 21:10:33.395056] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:10.736 [2024-06-07 21:10:33.395099] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.736 [2024-06-07 21:10:33.395615] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.736 [2024-06-07 21:10:33.395665] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:10.736 [2024-06-07 21:10:33.395807] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:10.736 [2024-06-07 21:10:33.395836] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:10.736 [2024-06-07 21:10:33.395961] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:14:10.736 [2024-06-07 21:10:33.395976] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:10.736 [2024-06-07 21:10:33.396083] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:14:10.736 [2024-06-07 21:10:33.396444] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:14:10.736 [2024-06-07 21:10:33.396468] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:14:10.736 [2024-06-07 21:10:33.396622] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.736 pt2 00:14:10.994 21:10:33 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:10.994 21:10:33 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:10.994 21:10:33 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:10.994 21:10:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:10.994 21:10:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:10.994 21:10:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:10.994 21:10:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:10.994 21:10:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:10.994 21:10:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:10.994 21:10:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:10.994 21:10:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:10.994 21:10:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:10.994 21:10:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.994 21:10:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.251 21:10:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:11.251 "name": "raid_bdev1", 00:14:11.251 "uuid": "e9ab4724-b6fa-4f55-8406-ac08854e013c", 00:14:11.251 "strip_size_kb": 64, 00:14:11.251 "state": "online", 00:14:11.251 "raid_level": "concat", 00:14:11.251 "superblock": true, 00:14:11.251 "num_base_bdevs": 2, 00:14:11.252 "num_base_bdevs_discovered": 2, 00:14:11.252 "num_base_bdevs_operational": 2, 00:14:11.252 "base_bdevs_list": [ 00:14:11.252 { 00:14:11.252 "name": "pt1", 00:14:11.252 "uuid": "f2e961bb-1499-5dd5-b387-d1fad9728d45", 00:14:11.252 "is_configured": true, 00:14:11.252 "data_offset": 2048, 00:14:11.252 "data_size": 63488 00:14:11.252 }, 00:14:11.252 { 00:14:11.252 "name": "pt2", 00:14:11.252 "uuid": "5d214af8-ca12-5a02-95e2-34a1e08fe51b", 00:14:11.252 "is_configured": true, 00:14:11.252 "data_offset": 2048, 00:14:11.252 "data_size": 63488 00:14:11.252 } 00:14:11.252 ] 00:14:11.252 }' 00:14:11.252 21:10:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:11.252 21:10:33 -- common/autotest_common.sh@10 -- # set +x 00:14:11.818 21:10:34 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:11.818 21:10:34 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:12.077 [2024-06-07 21:10:34.611374] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.077 21:10:34 -- bdev/bdev_raid.sh@430 -- # '[' e9ab4724-b6fa-4f55-8406-ac08854e013c '!=' e9ab4724-b6fa-4f55-8406-ac08854e013c ']' 00:14:12.077 21:10:34 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:14:12.077 21:10:34 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:12.077 21:10:34 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:12.077 21:10:34 -- bdev/bdev_raid.sh@511 -- # killprocess 126810 00:14:12.077 21:10:34 -- common/autotest_common.sh@926 -- # '[' -z 126810 ']' 00:14:12.077 21:10:34 -- common/autotest_common.sh@930 -- # kill -0 126810 00:14:12.077 21:10:34 -- common/autotest_common.sh@931 -- # uname 00:14:12.077 21:10:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:12.077 21:10:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126810 00:14:12.077 killing process with pid 126810 00:14:12.077 21:10:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:12.077 21:10:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:12.077 21:10:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126810' 00:14:12.077 21:10:34 -- common/autotest_common.sh@945 -- # kill 126810 00:14:12.077 21:10:34 -- common/autotest_common.sh@950 -- # wait 126810 00:14:12.077 [2024-06-07 21:10:34.647428] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:12.077 [2024-06-07 21:10:34.647570] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.077 [2024-06-07 21:10:34.647634] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.077 [2024-06-07 21:10:34.647652] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:14:12.077 [2024-06-07 21:10:34.668035] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:12.336 ************************************ 00:14:12.336 END TEST raid_superblock_test 00:14:12.336 
************************************ 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:12.336 00:14:12.336 real 0m7.747s 00:14:12.336 user 0m14.063s 00:14:12.336 sys 0m0.982s 00:14:12.336 21:10:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:12.336 21:10:34 -- common/autotest_common.sh@10 -- # set +x 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:14:12.336 21:10:34 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:12.336 21:10:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:12.336 21:10:34 -- common/autotest_common.sh@10 -- # set +x 00:14:12.336 ************************************ 00:14:12.336 START TEST raid_state_function_test 00:14:12.336 ************************************ 00:14:12.336 21:10:34 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@226 -- # raid_pid=127065 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127065' 00:14:12.336 Process raid pid: 127065 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127065 /var/tmp/spdk-raid.sock 00:14:12.336 21:10:34 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:12.336 21:10:34 -- common/autotest_common.sh@819 -- # '[' -z 127065 ']' 00:14:12.336 21:10:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:12.336 21:10:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:12.336 21:10:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:14:12.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:12.336 21:10:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:12.336 21:10:34 -- common/autotest_common.sh@10 -- # set +x 00:14:12.595 [2024-06-07 21:10:35.028122] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:12.595 [2024-06-07 21:10:35.028364] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.595 [2024-06-07 21:10:35.192140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.853 [2024-06-07 21:10:35.283834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.853 [2024-06-07 21:10:35.337231] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:13.419 21:10:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:13.419 21:10:35 -- common/autotest_common.sh@852 -- # return 0 00:14:13.419 21:10:35 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:13.686 [2024-06-07 21:10:36.116947] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:13.686 [2024-06-07 21:10:36.117048] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:13.686 [2024-06-07 21:10:36.117078] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:13.686 [2024-06-07 21:10:36.117097] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:13.686 21:10:36 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:13.686 21:10:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:13.686 21:10:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:13.686 21:10:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:13.686 21:10:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:13.686 21:10:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:13.686 21:10:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:13.686 21:10:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:13.686 21:10:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:13.686 21:10:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:13.686 21:10:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.686 21:10:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.971 21:10:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:13.971 "name": "Existed_Raid", 00:14:13.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.971 "strip_size_kb": 0, 00:14:13.971 "state": "configuring", 00:14:13.971 "raid_level": "raid1", 00:14:13.971 "superblock": false, 00:14:13.971 "num_base_bdevs": 2, 00:14:13.971 "num_base_bdevs_discovered": 0, 00:14:13.971 "num_base_bdevs_operational": 2, 00:14:13.971 "base_bdevs_list": [ 00:14:13.972 { 00:14:13.972 "name": "BaseBdev1", 00:14:13.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.972 "is_configured": false, 00:14:13.972 "data_offset": 0, 00:14:13.972 "data_size": 0 
00:14:13.972 }, 00:14:13.972 { 00:14:13.972 "name": "BaseBdev2", 00:14:13.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.972 "is_configured": false, 00:14:13.972 "data_offset": 0, 00:14:13.972 "data_size": 0 00:14:13.972 } 00:14:13.972 ] 00:14:13.972 }' 00:14:13.972 21:10:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:13.972 21:10:36 -- common/autotest_common.sh@10 -- # set +x 00:14:14.539 21:10:37 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:14.798 [2024-06-07 21:10:37.293201] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:14.798 [2024-06-07 21:10:37.293588] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:14.798 21:10:37 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:15.056 [2024-06-07 21:10:37.561262] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:15.056 [2024-06-07 21:10:37.561494] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:15.056 [2024-06-07 21:10:37.561668] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:15.056 [2024-06-07 21:10:37.561851] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:15.056 21:10:37 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:15.315 [2024-06-07 21:10:37.824127] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.315 BaseBdev1 00:14:15.315 21:10:37 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:15.315 21:10:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:15.315 21:10:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:15.315 21:10:37 -- common/autotest_common.sh@889 -- # local i 00:14:15.315 21:10:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:15.315 21:10:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:15.315 21:10:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:15.574 21:10:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:15.574 [ 00:14:15.574 { 00:14:15.574 "name": "BaseBdev1", 00:14:15.574 "aliases": [ 00:14:15.574 "4c8775eb-718f-4956-a70a-b2aad8abf33e" 00:14:15.574 ], 00:14:15.574 "product_name": "Malloc disk", 00:14:15.574 "block_size": 512, 00:14:15.574 "num_blocks": 65536, 00:14:15.574 "uuid": "4c8775eb-718f-4956-a70a-b2aad8abf33e", 00:14:15.574 "assigned_rate_limits": { 00:14:15.574 "rw_ios_per_sec": 0, 00:14:15.574 "rw_mbytes_per_sec": 0, 00:14:15.574 "r_mbytes_per_sec": 0, 00:14:15.574 "w_mbytes_per_sec": 0 00:14:15.574 }, 00:14:15.574 "claimed": true, 00:14:15.574 "claim_type": "exclusive_write", 00:14:15.574 "zoned": false, 00:14:15.574 "supported_io_types": { 00:14:15.574 "read": true, 00:14:15.574 "write": true, 00:14:15.574 "unmap": true, 00:14:15.574 "write_zeroes": true, 00:14:15.574 "flush": true, 00:14:15.574 "reset": true, 00:14:15.574 "compare": false, 00:14:15.574 "compare_and_write": false, 
00:14:15.574 "abort": true, 00:14:15.574 "nvme_admin": false, 00:14:15.574 "nvme_io": false 00:14:15.574 }, 00:14:15.574 "memory_domains": [ 00:14:15.574 { 00:14:15.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.574 "dma_device_type": 2 00:14:15.574 } 00:14:15.574 ], 00:14:15.574 "driver_specific": {} 00:14:15.574 } 00:14:15.574 ] 00:14:15.574 21:10:38 -- common/autotest_common.sh@895 -- # return 0 00:14:15.574 21:10:38 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:15.574 21:10:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:15.574 21:10:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:15.574 21:10:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:15.574 21:10:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:15.574 21:10:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:15.574 21:10:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:15.574 21:10:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:15.574 21:10:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:15.574 21:10:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:15.574 21:10:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.574 21:10:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.833 21:10:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:15.833 "name": "Existed_Raid", 00:14:15.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.833 "strip_size_kb": 0, 00:14:15.833 "state": "configuring", 00:14:15.833 "raid_level": "raid1", 00:14:15.833 "superblock": false, 00:14:15.833 "num_base_bdevs": 2, 00:14:15.833 "num_base_bdevs_discovered": 1, 00:14:15.833 "num_base_bdevs_operational": 2, 00:14:15.833 "base_bdevs_list": [ 00:14:15.833 { 00:14:15.833 "name": "BaseBdev1", 00:14:15.833 "uuid": "4c8775eb-718f-4956-a70a-b2aad8abf33e", 00:14:15.833 "is_configured": true, 00:14:15.833 "data_offset": 0, 00:14:15.833 "data_size": 65536 00:14:15.833 }, 00:14:15.833 { 00:14:15.833 "name": "BaseBdev2", 00:14:15.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.833 "is_configured": false, 00:14:15.833 "data_offset": 0, 00:14:15.833 "data_size": 0 00:14:15.833 } 00:14:15.833 ] 00:14:15.833 }' 00:14:15.833 21:10:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:15.833 21:10:38 -- common/autotest_common.sh@10 -- # set +x 00:14:16.768 21:10:39 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:16.768 [2024-06-07 21:10:39.400560] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:16.768 [2024-06-07 21:10:39.400786] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:14:16.768 21:10:39 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:16.768 21:10:39 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:17.026 [2024-06-07 21:10:39.600637] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.026 [2024-06-07 21:10:39.602825] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:17.026 [2024-06-07 21:10:39.603036] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:17.026 21:10:39 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:17.026 21:10:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:17.026 21:10:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:17.026 21:10:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:17.026 21:10:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:17.026 21:10:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:17.026 21:10:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:17.026 21:10:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:17.026 21:10:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:17.026 21:10:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:17.026 21:10:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:17.026 21:10:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:17.026 21:10:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.027 21:10:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.286 21:10:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:17.286 "name": "Existed_Raid", 00:14:17.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.286 "strip_size_kb": 0, 00:14:17.286 "state": "configuring", 00:14:17.286 "raid_level": "raid1", 00:14:17.286 "superblock": false, 00:14:17.286 "num_base_bdevs": 2, 00:14:17.286 "num_base_bdevs_discovered": 1, 00:14:17.286 "num_base_bdevs_operational": 2, 00:14:17.286 "base_bdevs_list": [ 00:14:17.286 { 00:14:17.286 "name": "BaseBdev1", 00:14:17.286 "uuid": "4c8775eb-718f-4956-a70a-b2aad8abf33e", 00:14:17.286 "is_configured": true, 00:14:17.286 "data_offset": 0, 00:14:17.286 "data_size": 65536 00:14:17.286 }, 00:14:17.286 { 00:14:17.286 "name": "BaseBdev2", 00:14:17.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.286 "is_configured": false, 00:14:17.286 "data_offset": 0, 00:14:17.286 "data_size": 0 00:14:17.286 } 00:14:17.286 ] 00:14:17.286 }' 00:14:17.286 21:10:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:17.286 21:10:39 -- common/autotest_common.sh@10 -- # set +x 00:14:17.853 21:10:40 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:18.111 [2024-06-07 21:10:40.733083] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:18.111 [2024-06-07 21:10:40.733477] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:18.111 [2024-06-07 21:10:40.733622] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:18.111 [2024-06-07 21:10:40.733876] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:14:18.111 [2024-06-07 21:10:40.734553] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:18.111 [2024-06-07 21:10:40.734722] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:14:18.111 [2024-06-07 21:10:40.735211] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.111 BaseBdev2 00:14:18.111 21:10:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:18.111 21:10:40 -- 
common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:18.111 21:10:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:18.111 21:10:40 -- common/autotest_common.sh@889 -- # local i 00:14:18.111 21:10:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:18.111 21:10:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:18.111 21:10:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:18.370 21:10:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:18.629 [ 00:14:18.629 { 00:14:18.629 "name": "BaseBdev2", 00:14:18.629 "aliases": [ 00:14:18.629 "7bb96ac8-700d-444a-9b37-9c5f0796b38a" 00:14:18.629 ], 00:14:18.629 "product_name": "Malloc disk", 00:14:18.629 "block_size": 512, 00:14:18.629 "num_blocks": 65536, 00:14:18.629 "uuid": "7bb96ac8-700d-444a-9b37-9c5f0796b38a", 00:14:18.629 "assigned_rate_limits": { 00:14:18.629 "rw_ios_per_sec": 0, 00:14:18.629 "rw_mbytes_per_sec": 0, 00:14:18.629 "r_mbytes_per_sec": 0, 00:14:18.629 "w_mbytes_per_sec": 0 00:14:18.629 }, 00:14:18.629 "claimed": true, 00:14:18.629 "claim_type": "exclusive_write", 00:14:18.629 "zoned": false, 00:14:18.629 "supported_io_types": { 00:14:18.629 "read": true, 00:14:18.629 "write": true, 00:14:18.629 "unmap": true, 00:14:18.629 "write_zeroes": true, 00:14:18.629 "flush": true, 00:14:18.629 "reset": true, 00:14:18.629 "compare": false, 00:14:18.629 "compare_and_write": false, 00:14:18.629 "abort": true, 00:14:18.629 "nvme_admin": false, 00:14:18.629 "nvme_io": false 00:14:18.629 }, 00:14:18.629 "memory_domains": [ 00:14:18.629 { 00:14:18.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.629 "dma_device_type": 2 00:14:18.629 } 00:14:18.629 ], 00:14:18.629 "driver_specific": {} 00:14:18.629 } 00:14:18.629 ] 00:14:18.629 21:10:41 -- common/autotest_common.sh@895 -- # return 0 00:14:18.629 21:10:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:18.629 21:10:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:18.629 21:10:41 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:18.629 21:10:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:18.629 21:10:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:18.629 21:10:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:18.629 21:10:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:18.629 21:10:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:18.629 21:10:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:18.629 21:10:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:18.629 21:10:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:18.629 21:10:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:18.629 21:10:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.629 21:10:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.887 21:10:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:18.887 "name": "Existed_Raid", 00:14:18.887 "uuid": "ef76ca58-8b61-47c7-bf17-f3951109fbfc", 00:14:18.887 "strip_size_kb": 0, 00:14:18.887 "state": "online", 00:14:18.887 "raid_level": "raid1", 00:14:18.887 "superblock": false, 00:14:18.887 "num_base_bdevs": 2, 00:14:18.887 
"num_base_bdevs_discovered": 2, 00:14:18.887 "num_base_bdevs_operational": 2, 00:14:18.887 "base_bdevs_list": [ 00:14:18.887 { 00:14:18.887 "name": "BaseBdev1", 00:14:18.887 "uuid": "4c8775eb-718f-4956-a70a-b2aad8abf33e", 00:14:18.887 "is_configured": true, 00:14:18.887 "data_offset": 0, 00:14:18.887 "data_size": 65536 00:14:18.887 }, 00:14:18.887 { 00:14:18.887 "name": "BaseBdev2", 00:14:18.887 "uuid": "7bb96ac8-700d-444a-9b37-9c5f0796b38a", 00:14:18.887 "is_configured": true, 00:14:18.887 "data_offset": 0, 00:14:18.887 "data_size": 65536 00:14:18.887 } 00:14:18.887 ] 00:14:18.887 }' 00:14:18.887 21:10:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:18.887 21:10:41 -- common/autotest_common.sh@10 -- # set +x 00:14:19.453 21:10:42 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:19.712 [2024-06-07 21:10:42.317516] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:19.712 21:10:42 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:19.712 21:10:42 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:14:19.712 21:10:42 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:19.712 21:10:42 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:19.712 21:10:42 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:14:19.712 21:10:42 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:19.712 21:10:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:19.712 21:10:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:19.712 21:10:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:19.712 21:10:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:19.712 21:10:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:19.712 21:10:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:19.712 21:10:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:19.712 21:10:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:19.712 21:10:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:19.712 21:10:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.712 21:10:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.970 21:10:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:19.970 "name": "Existed_Raid", 00:14:19.970 "uuid": "ef76ca58-8b61-47c7-bf17-f3951109fbfc", 00:14:19.970 "strip_size_kb": 0, 00:14:19.970 "state": "online", 00:14:19.970 "raid_level": "raid1", 00:14:19.970 "superblock": false, 00:14:19.970 "num_base_bdevs": 2, 00:14:19.970 "num_base_bdevs_discovered": 1, 00:14:19.970 "num_base_bdevs_operational": 1, 00:14:19.970 "base_bdevs_list": [ 00:14:19.970 { 00:14:19.970 "name": null, 00:14:19.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.970 "is_configured": false, 00:14:19.970 "data_offset": 0, 00:14:19.970 "data_size": 65536 00:14:19.970 }, 00:14:19.970 { 00:14:19.970 "name": "BaseBdev2", 00:14:19.970 "uuid": "7bb96ac8-700d-444a-9b37-9c5f0796b38a", 00:14:19.970 "is_configured": true, 00:14:19.970 "data_offset": 0, 00:14:19.970 "data_size": 65536 00:14:19.970 } 00:14:19.970 ] 00:14:19.970 }' 00:14:19.970 21:10:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:19.970 21:10:42 -- common/autotest_common.sh@10 -- # set +x 00:14:20.540 21:10:43 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:20.540 21:10:43 -- bdev/bdev_raid.sh@273 -- # 
(( i < num_base_bdevs )) 00:14:20.540 21:10:43 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:20.540 21:10:43 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.825 21:10:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:20.825 21:10:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:20.825 21:10:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:21.100 [2024-06-07 21:10:43.624206] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:21.100 [2024-06-07 21:10:43.624468] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:21.100 [2024-06-07 21:10:43.624655] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:21.100 [2024-06-07 21:10:43.634409] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:21.100 [2024-06-07 21:10:43.634598] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:14:21.100 21:10:43 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:21.100 21:10:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:21.100 21:10:43 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.100 21:10:43 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:21.359 21:10:43 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:21.359 21:10:43 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:21.359 21:10:43 -- bdev/bdev_raid.sh@287 -- # killprocess 127065 00:14:21.359 21:10:43 -- common/autotest_common.sh@926 -- # '[' -z 127065 ']' 00:14:21.359 21:10:43 -- common/autotest_common.sh@930 -- # kill -0 127065 00:14:21.359 21:10:43 -- common/autotest_common.sh@931 -- # uname 00:14:21.359 21:10:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:21.359 21:10:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127065 00:14:21.359 killing process with pid 127065 00:14:21.359 21:10:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:21.359 21:10:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:21.359 21:10:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127065' 00:14:21.359 21:10:43 -- common/autotest_common.sh@945 -- # kill 127065 00:14:21.359 21:10:43 -- common/autotest_common.sh@950 -- # wait 127065 00:14:21.359 [2024-06-07 21:10:43.865016] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:21.359 [2024-06-07 21:10:43.865115] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:21.617 00:14:21.617 real 0m9.141s 00:14:21.617 user 0m16.877s 00:14:21.617 ************************************ 00:14:21.617 END TEST raid_state_function_test 00:14:21.617 ************************************ 00:14:21.617 sys 0m1.002s 00:14:21.617 21:10:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:21.617 21:10:44 -- common/autotest_common.sh@10 -- # set +x 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:14:21.617 21:10:44 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:21.617 21:10:44 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:14:21.617 21:10:44 -- common/autotest_common.sh@10 -- # set +x 00:14:21.617 ************************************ 00:14:21.617 START TEST raid_state_function_test_sb 00:14:21.617 ************************************ 00:14:21.617 21:10:44 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@226 -- # raid_pid=127391 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127391' 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:21.617 Process raid pid: 127391 00:14:21.617 21:10:44 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127391 /var/tmp/spdk-raid.sock 00:14:21.617 21:10:44 -- common/autotest_common.sh@819 -- # '[' -z 127391 ']' 00:14:21.618 21:10:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:21.618 21:10:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:21.618 21:10:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:21.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:21.618 21:10:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:21.618 21:10:44 -- common/autotest_common.sh@10 -- # set +x 00:14:21.618 [2024-06-07 21:10:44.213857] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
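Note: at this point the superblock variant of the test has launched a dedicated bdev_svc app (pid 127391) on its own RPC socket and drives it entirely through rpc.py. The same bring-up can be sketched by hand, assuming the /home/vagrant/spdk_repo layout shown in the paths above; the polling loop is a stand-in for the harness's waitforlisten helper, and the malloc sizes match the bdev_malloc_create arguments this run uses (65536 blocks of 512 bytes, i.e. 32 MiB):

    # Launch a bare bdev application with raid debug logging on a private socket.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    while [ ! -S /var/tmp/spdk-raid.sock ]; do sleep 0.1; done

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'

    # Two 32 MiB malloc bdevs with 512-byte blocks serve as base devices.
    $RPC bdev_malloc_create 32 512 -b BaseBdev1
    $RPC bdev_malloc_create 32 512 -b BaseBdev2

    # -s writes an on-disk superblock (hence data_offset 2048 / data_size 63488
    # in the JSON dumps below); raid1 takes no strip size.
    $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

    # The array reports "configuring" until every base bdev is claimed,
    # then flips to "online".
    $RPC bdev_raid_get_bdevs all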
00:14:21.618 [2024-06-07 21:10:44.214742] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.875 [2024-06-07 21:10:44.369943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.875 [2024-06-07 21:10:44.441075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.875 [2024-06-07 21:10:44.498747] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.808 21:10:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:22.808 21:10:45 -- common/autotest_common.sh@852 -- # return 0 00:14:22.808 21:10:45 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:22.808 [2024-06-07 21:10:45.311562] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:22.808 [2024-06-07 21:10:45.311819] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:22.808 [2024-06-07 21:10:45.311929] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:22.808 [2024-06-07 21:10:45.311984] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:22.808 21:10:45 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:22.808 21:10:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:22.808 21:10:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:22.808 21:10:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:22.808 21:10:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:22.808 21:10:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:22.808 21:10:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:22.808 21:10:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:22.808 21:10:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:22.808 21:10:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:22.808 21:10:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.808 21:10:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.066 21:10:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:23.066 "name": "Existed_Raid", 00:14:23.066 "uuid": "8bcf2da8-30b6-4dd4-9649-83eba2940929", 00:14:23.066 "strip_size_kb": 0, 00:14:23.066 "state": "configuring", 00:14:23.066 "raid_level": "raid1", 00:14:23.066 "superblock": true, 00:14:23.066 "num_base_bdevs": 2, 00:14:23.066 "num_base_bdevs_discovered": 0, 00:14:23.066 "num_base_bdevs_operational": 2, 00:14:23.066 "base_bdevs_list": [ 00:14:23.066 { 00:14:23.066 "name": "BaseBdev1", 00:14:23.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.066 "is_configured": false, 00:14:23.066 "data_offset": 0, 00:14:23.066 "data_size": 0 00:14:23.066 }, 00:14:23.066 { 00:14:23.066 "name": "BaseBdev2", 00:14:23.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.066 "is_configured": false, 00:14:23.066 "data_offset": 0, 00:14:23.066 "data_size": 0 00:14:23.066 } 00:14:23.066 ] 00:14:23.066 }' 00:14:23.066 21:10:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:23.066 21:10:45 -- 
common/autotest_common.sh@10 -- # set +x 00:14:23.630 21:10:46 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:23.888 [2024-06-07 21:10:46.431702] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:23.888 [2024-06-07 21:10:46.431920] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:23.888 21:10:46 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:24.146 [2024-06-07 21:10:46.639785] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:24.146 [2024-06-07 21:10:46.640053] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:24.146 [2024-06-07 21:10:46.640165] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:24.146 [2024-06-07 21:10:46.640236] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:24.146 21:10:46 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:24.404 [2024-06-07 21:10:46.875151] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:24.404 BaseBdev1 00:14:24.404 21:10:46 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:24.404 21:10:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:24.404 21:10:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:24.405 21:10:46 -- common/autotest_common.sh@889 -- # local i 00:14:24.405 21:10:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:24.405 21:10:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:24.405 21:10:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:24.663 21:10:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:24.663 [ 00:14:24.663 { 00:14:24.663 "name": "BaseBdev1", 00:14:24.663 "aliases": [ 00:14:24.663 "9e8a33c8-508e-40cc-9295-814b7d5fe588" 00:14:24.663 ], 00:14:24.663 "product_name": "Malloc disk", 00:14:24.663 "block_size": 512, 00:14:24.663 "num_blocks": 65536, 00:14:24.663 "uuid": "9e8a33c8-508e-40cc-9295-814b7d5fe588", 00:14:24.663 "assigned_rate_limits": { 00:14:24.663 "rw_ios_per_sec": 0, 00:14:24.663 "rw_mbytes_per_sec": 0, 00:14:24.663 "r_mbytes_per_sec": 0, 00:14:24.663 "w_mbytes_per_sec": 0 00:14:24.663 }, 00:14:24.663 "claimed": true, 00:14:24.663 "claim_type": "exclusive_write", 00:14:24.663 "zoned": false, 00:14:24.663 "supported_io_types": { 00:14:24.663 "read": true, 00:14:24.663 "write": true, 00:14:24.663 "unmap": true, 00:14:24.663 "write_zeroes": true, 00:14:24.663 "flush": true, 00:14:24.663 "reset": true, 00:14:24.663 "compare": false, 00:14:24.663 "compare_and_write": false, 00:14:24.663 "abort": true, 00:14:24.663 "nvme_admin": false, 00:14:24.663 "nvme_io": false 00:14:24.663 }, 00:14:24.663 "memory_domains": [ 00:14:24.663 { 00:14:24.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.663 "dma_device_type": 2 00:14:24.663 } 00:14:24.663 ], 00:14:24.663 "driver_specific": {} 00:14:24.663 } 00:14:24.663 ] 00:14:24.922 21:10:47 -- 
common/autotest_common.sh@895 -- # return 0 00:14:24.922 21:10:47 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:24.922 21:10:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:24.922 21:10:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:24.922 21:10:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:24.922 21:10:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:24.922 21:10:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:24.922 21:10:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:24.922 21:10:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:24.922 21:10:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:24.922 21:10:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:24.922 21:10:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.922 21:10:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.922 21:10:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:24.922 "name": "Existed_Raid", 00:14:24.922 "uuid": "9a431f5b-bc1f-4aba-b06a-2faf6cc7a2b1", 00:14:24.922 "strip_size_kb": 0, 00:14:24.922 "state": "configuring", 00:14:24.922 "raid_level": "raid1", 00:14:24.922 "superblock": true, 00:14:24.922 "num_base_bdevs": 2, 00:14:24.922 "num_base_bdevs_discovered": 1, 00:14:24.922 "num_base_bdevs_operational": 2, 00:14:24.922 "base_bdevs_list": [ 00:14:24.922 { 00:14:24.922 "name": "BaseBdev1", 00:14:24.922 "uuid": "9e8a33c8-508e-40cc-9295-814b7d5fe588", 00:14:24.922 "is_configured": true, 00:14:24.922 "data_offset": 2048, 00:14:24.922 "data_size": 63488 00:14:24.922 }, 00:14:24.923 { 00:14:24.923 "name": "BaseBdev2", 00:14:24.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.923 "is_configured": false, 00:14:24.923 "data_offset": 0, 00:14:24.923 "data_size": 0 00:14:24.923 } 00:14:24.923 ] 00:14:24.923 }' 00:14:24.923 21:10:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:24.923 21:10:47 -- common/autotest_common.sh@10 -- # set +x 00:14:25.858 21:10:48 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:25.858 [2024-06-07 21:10:48.463595] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:25.858 [2024-06-07 21:10:48.464271] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:14:25.858 21:10:48 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:25.858 21:10:48 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:26.117 21:10:48 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:26.375 BaseBdev1 00:14:26.376 21:10:48 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:26.376 21:10:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:26.376 21:10:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:26.376 21:10:48 -- common/autotest_common.sh@889 -- # local i 00:14:26.376 21:10:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:26.376 21:10:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:26.376 21:10:48 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:26.634 21:10:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:26.893 [ 00:14:26.893 { 00:14:26.893 "name": "BaseBdev1", 00:14:26.893 "aliases": [ 00:14:26.893 "e11231e0-70be-4fd3-a22d-e569aa3b13d6" 00:14:26.893 ], 00:14:26.893 "product_name": "Malloc disk", 00:14:26.893 "block_size": 512, 00:14:26.893 "num_blocks": 65536, 00:14:26.893 "uuid": "e11231e0-70be-4fd3-a22d-e569aa3b13d6", 00:14:26.893 "assigned_rate_limits": { 00:14:26.893 "rw_ios_per_sec": 0, 00:14:26.893 "rw_mbytes_per_sec": 0, 00:14:26.893 "r_mbytes_per_sec": 0, 00:14:26.893 "w_mbytes_per_sec": 0 00:14:26.893 }, 00:14:26.893 "claimed": false, 00:14:26.893 "zoned": false, 00:14:26.893 "supported_io_types": { 00:14:26.893 "read": true, 00:14:26.893 "write": true, 00:14:26.893 "unmap": true, 00:14:26.893 "write_zeroes": true, 00:14:26.893 "flush": true, 00:14:26.893 "reset": true, 00:14:26.893 "compare": false, 00:14:26.893 "compare_and_write": false, 00:14:26.893 "abort": true, 00:14:26.893 "nvme_admin": false, 00:14:26.893 "nvme_io": false 00:14:26.893 }, 00:14:26.893 "memory_domains": [ 00:14:26.893 { 00:14:26.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.893 "dma_device_type": 2 00:14:26.893 } 00:14:26.893 ], 00:14:26.893 "driver_specific": {} 00:14:26.893 } 00:14:26.893 ] 00:14:26.893 21:10:49 -- common/autotest_common.sh@895 -- # return 0 00:14:26.893 21:10:49 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:27.152 [2024-06-07 21:10:49.668478] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.152 [2024-06-07 21:10:49.671170] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:27.152 [2024-06-07 21:10:49.671387] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:27.152 21:10:49 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:27.152 21:10:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:27.152 21:10:49 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:27.152 21:10:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:27.152 21:10:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:27.152 21:10:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:27.152 21:10:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:27.152 21:10:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:27.152 21:10:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:27.152 21:10:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:27.152 21:10:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:27.152 21:10:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:27.152 21:10:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.152 21:10:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.411 21:10:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:27.411 "name": "Existed_Raid", 00:14:27.411 "uuid": "a1b58e58-f339-4d50-8faa-a66256269509", 00:14:27.411 "strip_size_kb": 0, 00:14:27.411 "state": "configuring", 
00:14:27.411 "raid_level": "raid1", 00:14:27.411 "superblock": true, 00:14:27.411 "num_base_bdevs": 2, 00:14:27.411 "num_base_bdevs_discovered": 1, 00:14:27.411 "num_base_bdevs_operational": 2, 00:14:27.411 "base_bdevs_list": [ 00:14:27.411 { 00:14:27.411 "name": "BaseBdev1", 00:14:27.411 "uuid": "e11231e0-70be-4fd3-a22d-e569aa3b13d6", 00:14:27.411 "is_configured": true, 00:14:27.411 "data_offset": 2048, 00:14:27.411 "data_size": 63488 00:14:27.411 }, 00:14:27.411 { 00:14:27.411 "name": "BaseBdev2", 00:14:27.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.411 "is_configured": false, 00:14:27.411 "data_offset": 0, 00:14:27.411 "data_size": 0 00:14:27.411 } 00:14:27.411 ] 00:14:27.411 }' 00:14:27.411 21:10:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:27.411 21:10:49 -- common/autotest_common.sh@10 -- # set +x 00:14:27.979 21:10:50 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:28.236 [2024-06-07 21:10:50.849128] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:28.236 [2024-06-07 21:10:50.849739] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:14:28.236 [2024-06-07 21:10:50.849908] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:28.236 BaseBdev2 00:14:28.236 [2024-06-07 21:10:50.850208] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:28.236 [2024-06-07 21:10:50.851013] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:14:28.236 [2024-06-07 21:10:50.851214] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:14:28.236 [2024-06-07 21:10:50.851586] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.236 21:10:50 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:28.236 21:10:50 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:28.236 21:10:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:28.236 21:10:50 -- common/autotest_common.sh@889 -- # local i 00:14:28.236 21:10:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:28.236 21:10:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:28.236 21:10:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:28.494 21:10:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:28.753 [ 00:14:28.753 { 00:14:28.753 "name": "BaseBdev2", 00:14:28.753 "aliases": [ 00:14:28.753 "c0256175-8472-410b-b1aa-a7e741eef970" 00:14:28.753 ], 00:14:28.753 "product_name": "Malloc disk", 00:14:28.753 "block_size": 512, 00:14:28.753 "num_blocks": 65536, 00:14:28.753 "uuid": "c0256175-8472-410b-b1aa-a7e741eef970", 00:14:28.753 "assigned_rate_limits": { 00:14:28.753 "rw_ios_per_sec": 0, 00:14:28.753 "rw_mbytes_per_sec": 0, 00:14:28.753 "r_mbytes_per_sec": 0, 00:14:28.753 "w_mbytes_per_sec": 0 00:14:28.753 }, 00:14:28.753 "claimed": true, 00:14:28.753 "claim_type": "exclusive_write", 00:14:28.753 "zoned": false, 00:14:28.753 "supported_io_types": { 00:14:28.753 "read": true, 00:14:28.753 "write": true, 00:14:28.753 "unmap": true, 00:14:28.753 "write_zeroes": true, 00:14:28.753 "flush": true, 00:14:28.753 "reset": true, 
00:14:28.753 "compare": false, 00:14:28.753 "compare_and_write": false, 00:14:28.753 "abort": true, 00:14:28.753 "nvme_admin": false, 00:14:28.753 "nvme_io": false 00:14:28.753 }, 00:14:28.753 "memory_domains": [ 00:14:28.753 { 00:14:28.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.753 "dma_device_type": 2 00:14:28.753 } 00:14:28.753 ], 00:14:28.753 "driver_specific": {} 00:14:28.754 } 00:14:28.754 ] 00:14:28.754 21:10:51 -- common/autotest_common.sh@895 -- # return 0 00:14:28.754 21:10:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:28.754 21:10:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:28.754 21:10:51 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:28.754 21:10:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:28.754 21:10:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:28.754 21:10:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:28.754 21:10:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:28.754 21:10:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:28.754 21:10:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:28.754 21:10:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:28.754 21:10:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:28.754 21:10:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:28.754 21:10:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.754 21:10:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.012 21:10:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:29.012 "name": "Existed_Raid", 00:14:29.012 "uuid": "a1b58e58-f339-4d50-8faa-a66256269509", 00:14:29.012 "strip_size_kb": 0, 00:14:29.012 "state": "online", 00:14:29.012 "raid_level": "raid1", 00:14:29.012 "superblock": true, 00:14:29.012 "num_base_bdevs": 2, 00:14:29.012 "num_base_bdevs_discovered": 2, 00:14:29.012 "num_base_bdevs_operational": 2, 00:14:29.012 "base_bdevs_list": [ 00:14:29.012 { 00:14:29.012 "name": "BaseBdev1", 00:14:29.012 "uuid": "e11231e0-70be-4fd3-a22d-e569aa3b13d6", 00:14:29.012 "is_configured": true, 00:14:29.012 "data_offset": 2048, 00:14:29.012 "data_size": 63488 00:14:29.012 }, 00:14:29.012 { 00:14:29.012 "name": "BaseBdev2", 00:14:29.012 "uuid": "c0256175-8472-410b-b1aa-a7e741eef970", 00:14:29.012 "is_configured": true, 00:14:29.012 "data_offset": 2048, 00:14:29.012 "data_size": 63488 00:14:29.012 } 00:14:29.012 ] 00:14:29.012 }' 00:14:29.012 21:10:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:29.012 21:10:51 -- common/autotest_common.sh@10 -- # set +x 00:14:29.580 21:10:52 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:29.838 [2024-06-07 21:10:52.441766] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:29.838 21:10:52 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:29.838 21:10:52 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:14:29.838 21:10:52 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:29.838 21:10:52 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:29.838 21:10:52 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:14:29.838 21:10:52 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:29.838 21:10:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:29.838 
21:10:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:29.838 21:10:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:29.838 21:10:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:29.838 21:10:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:29.838 21:10:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:29.838 21:10:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:29.838 21:10:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:29.838 21:10:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:29.838 21:10:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.838 21:10:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.097 21:10:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:30.097 "name": "Existed_Raid", 00:14:30.097 "uuid": "a1b58e58-f339-4d50-8faa-a66256269509", 00:14:30.097 "strip_size_kb": 0, 00:14:30.097 "state": "online", 00:14:30.097 "raid_level": "raid1", 00:14:30.097 "superblock": true, 00:14:30.097 "num_base_bdevs": 2, 00:14:30.097 "num_base_bdevs_discovered": 1, 00:14:30.097 "num_base_bdevs_operational": 1, 00:14:30.097 "base_bdevs_list": [ 00:14:30.097 { 00:14:30.097 "name": null, 00:14:30.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.097 "is_configured": false, 00:14:30.097 "data_offset": 2048, 00:14:30.097 "data_size": 63488 00:14:30.097 }, 00:14:30.097 { 00:14:30.097 "name": "BaseBdev2", 00:14:30.097 "uuid": "c0256175-8472-410b-b1aa-a7e741eef970", 00:14:30.097 "is_configured": true, 00:14:30.097 "data_offset": 2048, 00:14:30.097 "data_size": 63488 00:14:30.097 } 00:14:30.097 ] 00:14:30.097 }' 00:14:30.097 21:10:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:30.097 21:10:52 -- common/autotest_common.sh@10 -- # set +x 00:14:31.031 21:10:53 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:31.031 21:10:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:31.031 21:10:53 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.031 21:10:53 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:31.031 21:10:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:31.031 21:10:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:31.031 21:10:53 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:31.289 [2024-06-07 21:10:53.900065] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:31.289 [2024-06-07 21:10:53.900284] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.289 [2024-06-07 21:10:53.900470] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.289 [2024-06-07 21:10:53.911244] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.289 [2024-06-07 21:10:53.911456] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:14:31.289 21:10:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:31.289 21:10:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:31.289 21:10:53 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
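Note: the teardown above exercises raid1's redundancy. Removing the first base bdev left Existed_Raid online and degraded (num_base_bdevs_discovered drops from 2 to 1), while removing the second drops the base bdev count to zero, so the array transitions from online to offline and raid_bdev_cleanup frees it. A minimal sketch of that two-step check, reusing the $RPC shorthand from the earlier note and the jq filters the script itself uses:

    # First deletion: raid1 absorbs the loss, the array stays "online" degraded.
    $RPC bdev_malloc_delete BaseBdev1
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

    # Second deletion: no base bdevs remain, so the array goes offline
    # and is cleaned up; the query below then prints nothing.
    $RPC bdev_malloc_delete BaseBdev2
    $RPC bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)'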
00:14:31.289 21:10:53 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:31.548 21:10:54 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:31.548 21:10:54 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:31.548 21:10:54 -- bdev/bdev_raid.sh@287 -- # killprocess 127391 00:14:31.548 21:10:54 -- common/autotest_common.sh@926 -- # '[' -z 127391 ']' 00:14:31.548 21:10:54 -- common/autotest_common.sh@930 -- # kill -0 127391 00:14:31.548 21:10:54 -- common/autotest_common.sh@931 -- # uname 00:14:31.548 21:10:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:31.548 21:10:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127391 00:14:31.548 killing process with pid 127391 00:14:31.548 21:10:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:31.548 21:10:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:31.548 21:10:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127391' 00:14:31.548 21:10:54 -- common/autotest_common.sh@945 -- # kill 127391 00:14:31.548 21:10:54 -- common/autotest_common.sh@950 -- # wait 127391 00:14:31.548 [2024-06-07 21:10:54.144702] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:31.548 [2024-06-07 21:10:54.144792] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:31.807 21:10:54 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:31.807 ************************************ 00:14:31.807 END TEST raid_state_function_test_sb 00:14:31.807 ************************************ 00:14:31.807 00:14:31.807 real 0m10.212s 00:14:31.807 user 0m18.827s 00:14:31.807 sys 0m1.182s 00:14:31.807 21:10:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.807 21:10:54 -- common/autotest_common.sh@10 -- # set +x 00:14:31.807 21:10:54 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:14:31.807 21:10:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:31.807 21:10:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:31.807 21:10:54 -- common/autotest_common.sh@10 -- # set +x 00:14:31.807 ************************************ 00:14:31.807 START TEST raid_superblock_test 00:14:31.807 ************************************ 00:14:31.807 21:10:54 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:14:31.807 21:10:54 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:14:31.807 21:10:54 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:31.807 21:10:54 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:31.807 21:10:54 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:31.807 21:10:54 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:31.807 21:10:54 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:31.807 21:10:54 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:31.807 21:10:54 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:31.807 21:10:54 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:31.807 21:10:54 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:31.807 21:10:54 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:31.807 21:10:54 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:31.807 21:10:54 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:31.807 21:10:54 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:14:31.807 21:10:54 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:14:31.807 21:10:54 -- bdev/bdev_raid.sh@357 -- # raid_pid=127735 00:14:31.807 21:10:54 
-- bdev/bdev_raid.sh@358 -- # waitforlisten 127735 /var/tmp/spdk-raid.sock 00:14:31.807 21:10:54 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:31.807 21:10:54 -- common/autotest_common.sh@819 -- # '[' -z 127735 ']' 00:14:31.807 21:10:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:31.807 21:10:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:31.807 21:10:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:31.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:31.807 21:10:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:31.807 21:10:54 -- common/autotest_common.sh@10 -- # set +x 00:14:32.066 [2024-06-07 21:10:54.488317] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:32.066 [2024-06-07 21:10:54.489341] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127735 ] 00:14:32.066 [2024-06-07 21:10:54.657157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.066 [2024-06-07 21:10:54.736765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.324 [2024-06-07 21:10:54.794813] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.890 21:10:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:32.890 21:10:55 -- common/autotest_common.sh@852 -- # return 0 00:14:32.890 21:10:55 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:32.890 21:10:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:32.890 21:10:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:32.890 21:10:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:32.890 21:10:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:32.890 21:10:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:32.890 21:10:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:32.890 21:10:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:32.890 21:10:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:33.148 malloc1 00:14:33.148 21:10:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:33.407 [2024-06-07 21:10:55.880453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:33.407 [2024-06-07 21:10:55.880769] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.407 [2024-06-07 21:10:55.880850] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:14:33.407 [2024-06-07 21:10:55.881135] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.407 [2024-06-07 21:10:55.883714] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.407 [2024-06-07 21:10:55.883912] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:33.407 pt1 00:14:33.408 
21:10:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:33.408 21:10:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:33.408 21:10:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:33.408 21:10:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:33.408 21:10:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:33.408 21:10:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:33.408 21:10:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:33.408 21:10:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:33.408 21:10:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:33.670 malloc2 00:14:33.670 21:10:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:33.929 [2024-06-07 21:10:56.367576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:33.929 [2024-06-07 21:10:56.367850] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.929 [2024-06-07 21:10:56.367927] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:14:33.929 [2024-06-07 21:10:56.368085] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.929 [2024-06-07 21:10:56.370258] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.929 [2024-06-07 21:10:56.370436] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:33.929 pt2 00:14:33.929 21:10:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:33.929 21:10:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:33.929 21:10:56 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:33.929 [2024-06-07 21:10:56.579704] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:33.929 [2024-06-07 21:10:56.581784] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:33.929 [2024-06-07 21:10:56.582163] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:14:33.929 [2024-06-07 21:10:56.582285] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:33.929 [2024-06-07 21:10:56.582471] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:33.929 [2024-06-07 21:10:56.583101] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:14:33.929 [2024-06-07 21:10:56.583276] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:14:33.929 [2024-06-07 21:10:56.583573] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.929 21:10:56 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:33.929 21:10:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:33.929 21:10:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:33.929 21:10:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:33.929 21:10:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:33.929 21:10:56 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=2 00:14:33.929 21:10:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:33.929 21:10:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:33.929 21:10:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:33.929 21:10:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:33.929 21:10:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.929 21:10:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.188 21:10:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:34.188 "name": "raid_bdev1", 00:14:34.188 "uuid": "949e9add-6a39-4d84-a5d4-62accd813194", 00:14:34.188 "strip_size_kb": 0, 00:14:34.188 "state": "online", 00:14:34.188 "raid_level": "raid1", 00:14:34.188 "superblock": true, 00:14:34.188 "num_base_bdevs": 2, 00:14:34.188 "num_base_bdevs_discovered": 2, 00:14:34.188 "num_base_bdevs_operational": 2, 00:14:34.188 "base_bdevs_list": [ 00:14:34.188 { 00:14:34.188 "name": "pt1", 00:14:34.188 "uuid": "52d7d997-7d1e-5dad-aaed-560f6c61b86d", 00:14:34.188 "is_configured": true, 00:14:34.188 "data_offset": 2048, 00:14:34.188 "data_size": 63488 00:14:34.188 }, 00:14:34.188 { 00:14:34.188 "name": "pt2", 00:14:34.188 "uuid": "722739f8-c5a1-5b59-ada5-da29e03cd814", 00:14:34.188 "is_configured": true, 00:14:34.188 "data_offset": 2048, 00:14:34.188 "data_size": 63488 00:14:34.188 } 00:14:34.188 ] 00:14:34.188 }' 00:14:34.188 21:10:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:34.188 21:10:56 -- common/autotest_common.sh@10 -- # set +x 00:14:35.122 21:10:57 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:35.122 21:10:57 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:35.122 [2024-06-07 21:10:57.672092] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.122 21:10:57 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=949e9add-6a39-4d84-a5d4-62accd813194 00:14:35.122 21:10:57 -- bdev/bdev_raid.sh@380 -- # '[' -z 949e9add-6a39-4d84-a5d4-62accd813194 ']' 00:14:35.122 21:10:57 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:35.380 [2024-06-07 21:10:57.923929] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:35.380 [2024-06-07 21:10:57.924106] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.380 [2024-06-07 21:10:57.924322] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.380 [2024-06-07 21:10:57.924501] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:35.380 [2024-06-07 21:10:57.924605] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:14:35.380 21:10:57 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.380 21:10:57 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:35.638 21:10:58 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:35.638 21:10:58 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:35.638 21:10:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:35.638 21:10:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:35.897 21:10:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:35.897 21:10:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:36.155 21:10:58 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:36.155 21:10:58 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:36.414 21:10:58 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:36.414 21:10:58 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:36.414 21:10:58 -- common/autotest_common.sh@640 -- # local es=0 00:14:36.414 21:10:58 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:36.414 21:10:58 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.414 21:10:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:36.414 21:10:58 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.414 21:10:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:36.414 21:10:58 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.414 21:10:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:36.414 21:10:58 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.414 21:10:58 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:36.414 21:10:58 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:36.672 [2024-06-07 21:10:59.096167] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:36.672 [2024-06-07 21:10:59.098259] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:36.672 [2024-06-07 21:10:59.098472] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:36.672 [2024-06-07 21:10:59.098694] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:36.672 [2024-06-07 21:10:59.098833] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:36.672 [2024-06-07 21:10:59.098873] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:14:36.672 request: 00:14:36.672 { 00:14:36.672 "name": "raid_bdev1", 00:14:36.672 "raid_level": "raid1", 00:14:36.672 "base_bdevs": [ 00:14:36.672 "malloc1", 00:14:36.672 "malloc2" 00:14:36.672 ], 00:14:36.672 "superblock": false, 00:14:36.672 "method": "bdev_raid_create", 00:14:36.672 "req_id": 1 00:14:36.672 } 00:14:36.672 Got JSON-RPC error response 00:14:36.672 response: 00:14:36.672 { 00:14:36.672 "code": -17, 00:14:36.672 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:36.672 } 00:14:36.672 21:10:59 -- common/autotest_common.sh@643 -- # es=1 00:14:36.672 21:10:59 -- common/autotest_common.sh@651 -- # 
(( es > 128 )) 00:14:36.672 21:10:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:36.672 21:10:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:36.672 21:10:59 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.672 21:10:59 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:36.672 21:10:59 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:36.672 21:10:59 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:36.672 21:10:59 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:36.930 [2024-06-07 21:10:59.580226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:36.930 [2024-06-07 21:10:59.580530] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.930 [2024-06-07 21:10:59.580692] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:36.930 [2024-06-07 21:10:59.580814] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.930 [2024-06-07 21:10:59.583422] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.930 [2024-06-07 21:10:59.583596] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:36.930 [2024-06-07 21:10:59.583814] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:36.930 [2024-06-07 21:10:59.584000] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:36.930 pt1 00:14:36.930 21:10:59 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:36.930 21:10:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:36.930 21:10:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:36.930 21:10:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:36.930 21:10:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:36.930 21:10:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:36.930 21:10:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:36.930 21:10:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:36.930 21:10:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:36.930 21:10:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:36.930 21:10:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.930 21:10:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.497 21:10:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:37.497 "name": "raid_bdev1", 00:14:37.497 "uuid": "949e9add-6a39-4d84-a5d4-62accd813194", 00:14:37.497 "strip_size_kb": 0, 00:14:37.497 "state": "configuring", 00:14:37.497 "raid_level": "raid1", 00:14:37.497 "superblock": true, 00:14:37.497 "num_base_bdevs": 2, 00:14:37.497 "num_base_bdevs_discovered": 1, 00:14:37.497 "num_base_bdevs_operational": 2, 00:14:37.497 "base_bdevs_list": [ 00:14:37.497 { 00:14:37.497 "name": "pt1", 00:14:37.497 "uuid": "52d7d997-7d1e-5dad-aaed-560f6c61b86d", 00:14:37.497 "is_configured": true, 00:14:37.497 "data_offset": 2048, 00:14:37.497 "data_size": 63488 00:14:37.497 }, 00:14:37.497 { 00:14:37.497 "name": null, 00:14:37.497 "uuid": "722739f8-c5a1-5b59-ada5-da29e03cd814", 00:14:37.497 
"is_configured": false, 00:14:37.497 "data_offset": 2048, 00:14:37.497 "data_size": 63488 00:14:37.497 } 00:14:37.497 ] 00:14:37.497 }' 00:14:37.497 21:10:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:37.497 21:10:59 -- common/autotest_common.sh@10 -- # set +x 00:14:38.064 21:11:00 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:38.064 21:11:00 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:38.064 21:11:00 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:38.064 21:11:00 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:38.064 [2024-06-07 21:11:00.720558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:38.064 [2024-06-07 21:11:00.720885] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.064 [2024-06-07 21:11:00.721051] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:38.064 [2024-06-07 21:11:00.721200] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.064 [2024-06-07 21:11:00.721800] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.064 [2024-06-07 21:11:00.721954] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:38.064 [2024-06-07 21:11:00.722155] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:38.064 [2024-06-07 21:11:00.722289] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:38.064 [2024-06-07 21:11:00.722479] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:14:38.064 [2024-06-07 21:11:00.722591] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:38.064 [2024-06-07 21:11:00.722856] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:14:38.064 [2024-06-07 21:11:00.723291] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:14:38.064 [2024-06-07 21:11:00.723433] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:14:38.064 [2024-06-07 21:11:00.723643] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.064 pt2 00:14:38.064 21:11:00 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:38.065 21:11:00 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:38.065 21:11:00 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:38.065 21:11:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:38.065 21:11:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:38.065 21:11:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:38.065 21:11:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:38.065 21:11:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:38.065 21:11:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:38.065 21:11:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:38.065 21:11:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:38.065 21:11:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:38.065 21:11:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.065 21:11:00 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.323 21:11:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:38.323 "name": "raid_bdev1", 00:14:38.323 "uuid": "949e9add-6a39-4d84-a5d4-62accd813194", 00:14:38.323 "strip_size_kb": 0, 00:14:38.323 "state": "online", 00:14:38.323 "raid_level": "raid1", 00:14:38.323 "superblock": true, 00:14:38.323 "num_base_bdevs": 2, 00:14:38.323 "num_base_bdevs_discovered": 2, 00:14:38.323 "num_base_bdevs_operational": 2, 00:14:38.323 "base_bdevs_list": [ 00:14:38.323 { 00:14:38.323 "name": "pt1", 00:14:38.323 "uuid": "52d7d997-7d1e-5dad-aaed-560f6c61b86d", 00:14:38.323 "is_configured": true, 00:14:38.323 "data_offset": 2048, 00:14:38.323 "data_size": 63488 00:14:38.323 }, 00:14:38.323 { 00:14:38.323 "name": "pt2", 00:14:38.323 "uuid": "722739f8-c5a1-5b59-ada5-da29e03cd814", 00:14:38.323 "is_configured": true, 00:14:38.323 "data_offset": 2048, 00:14:38.323 "data_size": 63488 00:14:38.323 } 00:14:38.323 ] 00:14:38.323 }' 00:14:38.323 21:11:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:38.323 21:11:00 -- common/autotest_common.sh@10 -- # set +x 00:14:39.257 21:11:01 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:39.257 21:11:01 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:39.257 [2024-06-07 21:11:01.853138] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.257 21:11:01 -- bdev/bdev_raid.sh@430 -- # '[' 949e9add-6a39-4d84-a5d4-62accd813194 '!=' 949e9add-6a39-4d84-a5d4-62accd813194 ']' 00:14:39.257 21:11:01 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:14:39.257 21:11:01 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:39.257 21:11:01 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:39.257 21:11:01 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:39.515 [2024-06-07 21:11:02.112978] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:39.515 21:11:02 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:39.515 21:11:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:39.516 21:11:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:39.516 21:11:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:39.516 21:11:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:39.516 21:11:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:39.516 21:11:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:39.516 21:11:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:39.516 21:11:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:39.516 21:11:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:39.516 21:11:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.516 21:11:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.774 21:11:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:39.774 "name": "raid_bdev1", 00:14:39.774 "uuid": "949e9add-6a39-4d84-a5d4-62accd813194", 00:14:39.774 "strip_size_kb": 0, 00:14:39.774 "state": "online", 00:14:39.774 "raid_level": "raid1", 00:14:39.774 "superblock": true, 00:14:39.774 "num_base_bdevs": 2, 00:14:39.774 "num_base_bdevs_discovered": 1, 00:14:39.774 "num_base_bdevs_operational": 1, 00:14:39.774 
"base_bdevs_list": [ 00:14:39.774 { 00:14:39.774 "name": null, 00:14:39.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.774 "is_configured": false, 00:14:39.774 "data_offset": 2048, 00:14:39.774 "data_size": 63488 00:14:39.774 }, 00:14:39.774 { 00:14:39.774 "name": "pt2", 00:14:39.774 "uuid": "722739f8-c5a1-5b59-ada5-da29e03cd814", 00:14:39.774 "is_configured": true, 00:14:39.774 "data_offset": 2048, 00:14:39.774 "data_size": 63488 00:14:39.774 } 00:14:39.774 ] 00:14:39.774 }' 00:14:39.774 21:11:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:39.774 21:11:02 -- common/autotest_common.sh@10 -- # set +x 00:14:40.340 21:11:02 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:40.602 [2024-06-07 21:11:03.193223] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:40.602 [2024-06-07 21:11:03.193466] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.602 [2024-06-07 21:11:03.193641] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.602 [2024-06-07 21:11:03.193840] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.602 [2024-06-07 21:11:03.193940] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:14:40.602 21:11:03 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.602 21:11:03 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:14:40.859 21:11:03 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:14:40.859 21:11:03 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:14:40.859 21:11:03 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:14:40.859 21:11:03 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:14:40.859 21:11:03 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:41.116 21:11:03 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:14:41.116 21:11:03 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:14:41.116 21:11:03 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:14:41.116 21:11:03 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:14:41.116 21:11:03 -- bdev/bdev_raid.sh@462 -- # i=1 00:14:41.116 21:11:03 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:41.374 [2024-06-07 21:11:03.897480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:41.374 [2024-06-07 21:11:03.897777] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.374 [2024-06-07 21:11:03.897938] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:41.374 [2024-06-07 21:11:03.898085] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.374 [2024-06-07 21:11:03.900479] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.374 [2024-06-07 21:11:03.900672] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:41.374 [2024-06-07 21:11:03.900862] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:41.374 [2024-06-07 21:11:03.901054] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:41.374 [2024-06-07 21:11:03.901252] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:14:41.374 [2024-06-07 21:11:03.901362] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:41.374 [2024-06-07 21:11:03.901545] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:14:41.374 [2024-06-07 21:11:03.901959] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:14:41.374 [2024-06-07 21:11:03.902083] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:14:41.374 [2024-06-07 21:11:03.902322] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.374 pt2 00:14:41.374 21:11:03 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:41.374 21:11:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:41.374 21:11:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:41.374 21:11:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:41.374 21:11:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:41.374 21:11:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:41.374 21:11:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:41.374 21:11:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:41.374 21:11:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:41.374 21:11:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:41.374 21:11:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.374 21:11:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.631 21:11:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:41.631 "name": "raid_bdev1", 00:14:41.631 "uuid": "949e9add-6a39-4d84-a5d4-62accd813194", 00:14:41.631 "strip_size_kb": 0, 00:14:41.631 "state": "online", 00:14:41.631 "raid_level": "raid1", 00:14:41.631 "superblock": true, 00:14:41.631 "num_base_bdevs": 2, 00:14:41.631 "num_base_bdevs_discovered": 1, 00:14:41.631 "num_base_bdevs_operational": 1, 00:14:41.631 "base_bdevs_list": [ 00:14:41.631 { 00:14:41.631 "name": null, 00:14:41.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.631 "is_configured": false, 00:14:41.631 "data_offset": 2048, 00:14:41.631 "data_size": 63488 00:14:41.631 }, 00:14:41.631 { 00:14:41.631 "name": "pt2", 00:14:41.631 "uuid": "722739f8-c5a1-5b59-ada5-da29e03cd814", 00:14:41.631 "is_configured": true, 00:14:41.631 "data_offset": 2048, 00:14:41.631 "data_size": 63488 00:14:41.631 } 00:14:41.631 ] 00:14:41.631 }' 00:14:41.631 21:11:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:41.631 21:11:04 -- common/autotest_common.sh@10 -- # set +x 00:14:42.198 21:11:04 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:14:42.198 21:11:04 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:42.198 21:11:04 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:14:42.456 [2024-06-07 21:11:04.982816] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:42.456 21:11:04 -- bdev/bdev_raid.sh@506 -- # '[' 949e9add-6a39-4d84-a5d4-62accd813194 '!=' 949e9add-6a39-4d84-a5d4-62accd813194 ']' 00:14:42.456 21:11:04 -- 
bdev/bdev_raid.sh@511 -- # killprocess 127735 00:14:42.456 21:11:04 -- common/autotest_common.sh@926 -- # '[' -z 127735 ']' 00:14:42.456 21:11:04 -- common/autotest_common.sh@930 -- # kill -0 127735 00:14:42.456 21:11:04 -- common/autotest_common.sh@931 -- # uname 00:14:42.456 21:11:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:42.456 21:11:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127735 00:14:42.456 killing process with pid 127735 00:14:42.456 21:11:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:42.456 21:11:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:42.456 21:11:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127735' 00:14:42.456 21:11:05 -- common/autotest_common.sh@945 -- # kill 127735 00:14:42.456 21:11:05 -- common/autotest_common.sh@950 -- # wait 127735 00:14:42.456 [2024-06-07 21:11:05.016113] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:42.456 [2024-06-07 21:11:05.016257] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.456 [2024-06-07 21:11:05.016357] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.456 [2024-06-07 21:11:05.016470] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:14:42.456 [2024-06-07 21:11:05.036660] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:42.715 ************************************ 00:14:42.715 END TEST raid_superblock_test 00:14:42.715 ************************************ 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:42.715 00:14:42.715 real 0m10.836s 00:14:42.715 user 0m20.253s 00:14:42.715 sys 0m1.290s 00:14:42.715 21:11:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:42.715 21:11:05 -- common/autotest_common.sh@10 -- # set +x 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:14:42.715 21:11:05 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:42.715 21:11:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:42.715 21:11:05 -- common/autotest_common.sh@10 -- # set +x 00:14:42.715 ************************************ 00:14:42.715 START TEST raid_state_function_test 00:14:42.715 ************************************ 00:14:42.715 21:11:05 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 
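The (( i <= num_base_bdevs )) / echo BaseBdev$i trace lines above are bdev_raid.sh building its list of base bdev names at the start of raid_state_function_test. A minimal standalone sketch of that bash idiom, with the variable names taken directly from the trace:

    num_base_bdevs=3
    # Each loop iteration echoes one name; the outer ($(...)) command
    # substitution word-splits the output into array elements.
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
    echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2 BaseBdev3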
00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@226 -- # raid_pid=128096 00:14:42.715 21:11:05 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:42.716 Process raid pid: 128096 00:14:42.716 21:11:05 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128096' 00:14:42.716 21:11:05 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128096 /var/tmp/spdk-raid.sock 00:14:42.716 21:11:05 -- common/autotest_common.sh@819 -- # '[' -z 128096 ']' 00:14:42.716 21:11:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:42.716 21:11:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:42.716 21:11:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:42.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:42.716 21:11:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:42.716 21:11:05 -- common/autotest_common.sh@10 -- # set +x 00:14:42.716 [2024-06-07 21:11:05.386607] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
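At this point the test launches SPDK's bdev_svc app on a private RPC socket (-r /var/tmp/spdk-raid.sock) and blocks in waitforlisten until the app accepts RPCs. A hedged sketch of that launch pattern, using the paths shown in the trace; the polling loop below is illustrative rather than the exact body of the waitforlisten helper:

    sock=/var/tmp/spdk-raid.sock
    rootdir=/home/vagrant/spdk_repo/spdk
    # Start the stub app that hosts the bdev layer, with bdev_raid debug logs on.
    "$rootdir"/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!
    # Poll until the app answers a trivial RPC on the socket, then proceed.
    until "$rootdir"/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null; do
        sleep 0.1
    done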
00:14:42.716 [2024-06-07 21:11:05.387077] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.974 [2024-06-07 21:11:05.556161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.974 [2024-06-07 21:11:05.634568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.233 [2024-06-07 21:11:05.691805] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.799 21:11:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:43.799 21:11:06 -- common/autotest_common.sh@852 -- # return 0 00:14:43.799 21:11:06 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:43.799 [2024-06-07 21:11:06.459545] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:43.799 [2024-06-07 21:11:06.459814] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:43.799 [2024-06-07 21:11:06.459942] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:43.799 [2024-06-07 21:11:06.460002] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:43.799 [2024-06-07 21:11:06.460192] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:43.799 [2024-06-07 21:11:06.460272] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:44.057 21:11:06 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:44.057 21:11:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:44.057 21:11:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:44.057 21:11:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:44.057 21:11:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:44.057 21:11:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:44.057 21:11:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:44.057 21:11:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:44.057 21:11:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:44.057 21:11:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:44.057 21:11:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.057 21:11:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.057 21:11:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:44.057 "name": "Existed_Raid", 00:14:44.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.057 "strip_size_kb": 64, 00:14:44.057 "state": "configuring", 00:14:44.057 "raid_level": "raid0", 00:14:44.057 "superblock": false, 00:14:44.057 "num_base_bdevs": 3, 00:14:44.057 "num_base_bdevs_discovered": 0, 00:14:44.057 "num_base_bdevs_operational": 3, 00:14:44.057 "base_bdevs_list": [ 00:14:44.057 { 00:14:44.057 "name": "BaseBdev1", 00:14:44.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.057 "is_configured": false, 00:14:44.057 "data_offset": 0, 00:14:44.057 "data_size": 0 00:14:44.057 }, 00:14:44.057 { 00:14:44.057 "name": "BaseBdev2", 00:14:44.057 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:44.057 "is_configured": false, 00:14:44.057 "data_offset": 0, 00:14:44.057 "data_size": 0 00:14:44.057 }, 00:14:44.057 { 00:14:44.057 "name": "BaseBdev3", 00:14:44.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.057 "is_configured": false, 00:14:44.057 "data_offset": 0, 00:14:44.057 "data_size": 0 00:14:44.057 } 00:14:44.057 ] 00:14:44.057 }' 00:14:44.057 21:11:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:44.057 21:11:06 -- common/autotest_common.sh@10 -- # set +x 00:14:44.992 21:11:07 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:44.992 [2024-06-07 21:11:07.583684] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:44.992 [2024-06-07 21:11:07.584039] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:44.992 21:11:07 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:45.252 [2024-06-07 21:11:07.835772] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:45.253 [2024-06-07 21:11:07.836142] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:45.253 [2024-06-07 21:11:07.836246] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.253 [2024-06-07 21:11:07.836301] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.253 [2024-06-07 21:11:07.836394] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:45.253 [2024-06-07 21:11:07.836460] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:45.253 21:11:07 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:45.511 [2024-06-07 21:11:08.059245] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.511 BaseBdev1 00:14:45.511 21:11:08 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:45.511 21:11:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:45.511 21:11:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:45.511 21:11:08 -- common/autotest_common.sh@889 -- # local i 00:14:45.511 21:11:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:45.511 21:11:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:45.511 21:11:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:45.769 21:11:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:46.026 [ 00:14:46.026 { 00:14:46.027 "name": "BaseBdev1", 00:14:46.027 "aliases": [ 00:14:46.027 "8dc07fc3-bad9-4bf3-a8f7-617da4a367f7" 00:14:46.027 ], 00:14:46.027 "product_name": "Malloc disk", 00:14:46.027 "block_size": 512, 00:14:46.027 "num_blocks": 65536, 00:14:46.027 "uuid": "8dc07fc3-bad9-4bf3-a8f7-617da4a367f7", 00:14:46.027 "assigned_rate_limits": { 00:14:46.027 "rw_ios_per_sec": 0, 00:14:46.027 "rw_mbytes_per_sec": 0, 00:14:46.027 "r_mbytes_per_sec": 0, 00:14:46.027 "w_mbytes_per_sec": 0 
00:14:46.027 }, 00:14:46.027 "claimed": true, 00:14:46.027 "claim_type": "exclusive_write", 00:14:46.027 "zoned": false, 00:14:46.027 "supported_io_types": { 00:14:46.027 "read": true, 00:14:46.027 "write": true, 00:14:46.027 "unmap": true, 00:14:46.027 "write_zeroes": true, 00:14:46.027 "flush": true, 00:14:46.027 "reset": true, 00:14:46.027 "compare": false, 00:14:46.027 "compare_and_write": false, 00:14:46.027 "abort": true, 00:14:46.027 "nvme_admin": false, 00:14:46.027 "nvme_io": false 00:14:46.027 }, 00:14:46.027 "memory_domains": [ 00:14:46.027 { 00:14:46.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.027 "dma_device_type": 2 00:14:46.027 } 00:14:46.027 ], 00:14:46.027 "driver_specific": {} 00:14:46.027 } 00:14:46.027 ] 00:14:46.027 21:11:08 -- common/autotest_common.sh@895 -- # return 0 00:14:46.027 21:11:08 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:46.027 21:11:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:46.027 21:11:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:46.027 21:11:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:46.027 21:11:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:46.027 21:11:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:46.027 21:11:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:46.027 21:11:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:46.027 21:11:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:46.027 21:11:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:46.027 21:11:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.027 21:11:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.285 21:11:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:46.285 "name": "Existed_Raid", 00:14:46.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.285 "strip_size_kb": 64, 00:14:46.285 "state": "configuring", 00:14:46.285 "raid_level": "raid0", 00:14:46.285 "superblock": false, 00:14:46.285 "num_base_bdevs": 3, 00:14:46.285 "num_base_bdevs_discovered": 1, 00:14:46.285 "num_base_bdevs_operational": 3, 00:14:46.285 "base_bdevs_list": [ 00:14:46.285 { 00:14:46.285 "name": "BaseBdev1", 00:14:46.285 "uuid": "8dc07fc3-bad9-4bf3-a8f7-617da4a367f7", 00:14:46.285 "is_configured": true, 00:14:46.285 "data_offset": 0, 00:14:46.285 "data_size": 65536 00:14:46.285 }, 00:14:46.285 { 00:14:46.285 "name": "BaseBdev2", 00:14:46.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.285 "is_configured": false, 00:14:46.285 "data_offset": 0, 00:14:46.285 "data_size": 0 00:14:46.285 }, 00:14:46.285 { 00:14:46.285 "name": "BaseBdev3", 00:14:46.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.285 "is_configured": false, 00:14:46.285 "data_offset": 0, 00:14:46.285 "data_size": 0 00:14:46.285 } 00:14:46.285 ] 00:14:46.285 }' 00:14:46.285 21:11:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:46.285 21:11:08 -- common/autotest_common.sh@10 -- # set +x 00:14:46.850 21:11:09 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:47.108 [2024-06-07 21:11:09.643675] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:47.108 [2024-06-07 21:11:09.643924] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006980 name Existed_Raid, state configuring 00:14:47.108 21:11:09 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:47.108 21:11:09 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:47.367 [2024-06-07 21:11:09.907766] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.367 [2024-06-07 21:11:09.910093] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:47.367 [2024-06-07 21:11:09.910304] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:47.367 [2024-06-07 21:11:09.910403] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:47.367 [2024-06-07 21:11:09.910565] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:47.367 21:11:09 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:47.367 21:11:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:47.367 21:11:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:47.367 21:11:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:47.367 21:11:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:47.367 21:11:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:47.367 21:11:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:47.367 21:11:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:47.367 21:11:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:47.367 21:11:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:47.367 21:11:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:47.367 21:11:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:47.367 21:11:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.367 21:11:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.625 21:11:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:47.625 "name": "Existed_Raid", 00:14:47.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.625 "strip_size_kb": 64, 00:14:47.625 "state": "configuring", 00:14:47.625 "raid_level": "raid0", 00:14:47.625 "superblock": false, 00:14:47.625 "num_base_bdevs": 3, 00:14:47.625 "num_base_bdevs_discovered": 1, 00:14:47.625 "num_base_bdevs_operational": 3, 00:14:47.625 "base_bdevs_list": [ 00:14:47.625 { 00:14:47.625 "name": "BaseBdev1", 00:14:47.625 "uuid": "8dc07fc3-bad9-4bf3-a8f7-617da4a367f7", 00:14:47.625 "is_configured": true, 00:14:47.625 "data_offset": 0, 00:14:47.625 "data_size": 65536 00:14:47.625 }, 00:14:47.625 { 00:14:47.625 "name": "BaseBdev2", 00:14:47.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.625 "is_configured": false, 00:14:47.625 "data_offset": 0, 00:14:47.625 "data_size": 0 00:14:47.625 }, 00:14:47.625 { 00:14:47.625 "name": "BaseBdev3", 00:14:47.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.625 "is_configured": false, 00:14:47.625 "data_offset": 0, 00:14:47.625 "data_size": 0 00:14:47.625 } 00:14:47.625 ] 00:14:47.625 }' 00:14:47.625 21:11:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:47.625 21:11:10 -- common/autotest_common.sh@10 -- # set +x 00:14:48.579 21:11:10 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:48.579 [2024-06-07 21:11:11.096923] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.579 BaseBdev2 00:14:48.579 21:11:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:48.579 21:11:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:48.579 21:11:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:48.579 21:11:11 -- common/autotest_common.sh@889 -- # local i 00:14:48.579 21:11:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:48.579 21:11:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:48.579 21:11:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:48.837 21:11:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:49.096 [ 00:14:49.096 { 00:14:49.096 "name": "BaseBdev2", 00:14:49.096 "aliases": [ 00:14:49.096 "03cd71b3-b2fb-4de6-b790-155164d8b944" 00:14:49.096 ], 00:14:49.096 "product_name": "Malloc disk", 00:14:49.096 "block_size": 512, 00:14:49.096 "num_blocks": 65536, 00:14:49.096 "uuid": "03cd71b3-b2fb-4de6-b790-155164d8b944", 00:14:49.096 "assigned_rate_limits": { 00:14:49.096 "rw_ios_per_sec": 0, 00:14:49.096 "rw_mbytes_per_sec": 0, 00:14:49.096 "r_mbytes_per_sec": 0, 00:14:49.096 "w_mbytes_per_sec": 0 00:14:49.096 }, 00:14:49.096 "claimed": true, 00:14:49.096 "claim_type": "exclusive_write", 00:14:49.096 "zoned": false, 00:14:49.096 "supported_io_types": { 00:14:49.096 "read": true, 00:14:49.096 "write": true, 00:14:49.096 "unmap": true, 00:14:49.096 "write_zeroes": true, 00:14:49.096 "flush": true, 00:14:49.096 "reset": true, 00:14:49.096 "compare": false, 00:14:49.096 "compare_and_write": false, 00:14:49.096 "abort": true, 00:14:49.096 "nvme_admin": false, 00:14:49.096 "nvme_io": false 00:14:49.096 }, 00:14:49.096 "memory_domains": [ 00:14:49.096 { 00:14:49.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.096 "dma_device_type": 2 00:14:49.096 } 00:14:49.096 ], 00:14:49.096 "driver_specific": {} 00:14:49.096 } 00:14:49.096 ] 00:14:49.096 21:11:11 -- common/autotest_common.sh@895 -- # return 0 00:14:49.096 21:11:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:49.096 21:11:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:49.096 21:11:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:49.096 21:11:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:49.096 21:11:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:49.096 21:11:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:49.096 21:11:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:49.096 21:11:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:49.096 21:11:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:49.096 21:11:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:49.096 21:11:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:49.096 21:11:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:49.096 21:11:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.096 21:11:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
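verify_raid_bdev_state, whose xtrace appears throughout this run, fetches the raid bdev's JSON once via bdev_raid_get_bdevs and then asserts individual fields from it. A condensed sketch of that check, assuming the same rpc.py/jq tooling; the per-field assertions are a simplification of the real helper:

    sock=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Pull the one raid bdev under test out of the full list.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # Assert the fields the trace shows at this point: still configuring,
    # with two of the three base bdevs discovered.
    [ "$(jq -r '.state' <<< "$info")" = configuring ]
    [ "$(jq -r '.num_base_bdevs_discovered' <<< "$info")" -eq 2 ]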
00:14:49.355 21:11:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:49.355 "name": "Existed_Raid", 00:14:49.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.355 "strip_size_kb": 64, 00:14:49.355 "state": "configuring", 00:14:49.355 "raid_level": "raid0", 00:14:49.355 "superblock": false, 00:14:49.355 "num_base_bdevs": 3, 00:14:49.355 "num_base_bdevs_discovered": 2, 00:14:49.355 "num_base_bdevs_operational": 3, 00:14:49.355 "base_bdevs_list": [ 00:14:49.355 { 00:14:49.355 "name": "BaseBdev1", 00:14:49.355 "uuid": "8dc07fc3-bad9-4bf3-a8f7-617da4a367f7", 00:14:49.355 "is_configured": true, 00:14:49.355 "data_offset": 0, 00:14:49.355 "data_size": 65536 00:14:49.355 }, 00:14:49.355 { 00:14:49.355 "name": "BaseBdev2", 00:14:49.355 "uuid": "03cd71b3-b2fb-4de6-b790-155164d8b944", 00:14:49.355 "is_configured": true, 00:14:49.355 "data_offset": 0, 00:14:49.355 "data_size": 65536 00:14:49.355 }, 00:14:49.355 { 00:14:49.355 "name": "BaseBdev3", 00:14:49.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.355 "is_configured": false, 00:14:49.355 "data_offset": 0, 00:14:49.355 "data_size": 0 00:14:49.355 } 00:14:49.355 ] 00:14:49.355 }' 00:14:49.355 21:11:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:49.355 21:11:11 -- common/autotest_common.sh@10 -- # set +x 00:14:49.922 21:11:12 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:50.181 [2024-06-07 21:11:12.750284] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:50.181 [2024-06-07 21:11:12.750717] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:50.181 [2024-06-07 21:11:12.750764] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:50.181 [2024-06-07 21:11:12.751028] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:50.181 [2024-06-07 21:11:12.751650] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:50.181 [2024-06-07 21:11:12.751787] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:14:50.181 [2024-06-07 21:11:12.752136] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.181 BaseBdev3 00:14:50.181 21:11:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:14:50.181 21:11:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:14:50.181 21:11:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:50.181 21:11:12 -- common/autotest_common.sh@889 -- # local i 00:14:50.181 21:11:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:50.181 21:11:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:50.181 21:11:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:50.440 21:11:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:50.698 [ 00:14:50.698 { 00:14:50.698 "name": "BaseBdev3", 00:14:50.698 "aliases": [ 00:14:50.698 "c7fcb252-74d0-4006-9ea6-88d29620945f" 00:14:50.698 ], 00:14:50.698 "product_name": "Malloc disk", 00:14:50.698 "block_size": 512, 00:14:50.698 "num_blocks": 65536, 00:14:50.698 "uuid": "c7fcb252-74d0-4006-9ea6-88d29620945f", 00:14:50.698 "assigned_rate_limits": { 00:14:50.698 
"rw_ios_per_sec": 0, 00:14:50.698 "rw_mbytes_per_sec": 0, 00:14:50.698 "r_mbytes_per_sec": 0, 00:14:50.698 "w_mbytes_per_sec": 0 00:14:50.698 }, 00:14:50.698 "claimed": true, 00:14:50.698 "claim_type": "exclusive_write", 00:14:50.698 "zoned": false, 00:14:50.698 "supported_io_types": { 00:14:50.698 "read": true, 00:14:50.698 "write": true, 00:14:50.698 "unmap": true, 00:14:50.698 "write_zeroes": true, 00:14:50.698 "flush": true, 00:14:50.698 "reset": true, 00:14:50.698 "compare": false, 00:14:50.698 "compare_and_write": false, 00:14:50.698 "abort": true, 00:14:50.698 "nvme_admin": false, 00:14:50.698 "nvme_io": false 00:14:50.698 }, 00:14:50.698 "memory_domains": [ 00:14:50.698 { 00:14:50.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.698 "dma_device_type": 2 00:14:50.698 } 00:14:50.698 ], 00:14:50.698 "driver_specific": {} 00:14:50.698 } 00:14:50.698 ] 00:14:50.698 21:11:13 -- common/autotest_common.sh@895 -- # return 0 00:14:50.698 21:11:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:50.698 21:11:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:50.698 21:11:13 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:50.698 21:11:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:50.698 21:11:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:50.698 21:11:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:50.698 21:11:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:50.698 21:11:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:50.698 21:11:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:50.698 21:11:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:50.698 21:11:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:50.698 21:11:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:50.698 21:11:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.698 21:11:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.972 21:11:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:50.972 "name": "Existed_Raid", 00:14:50.972 "uuid": "9bdec72c-adb7-4458-a78c-a38457b1f24a", 00:14:50.972 "strip_size_kb": 64, 00:14:50.972 "state": "online", 00:14:50.972 "raid_level": "raid0", 00:14:50.972 "superblock": false, 00:14:50.972 "num_base_bdevs": 3, 00:14:50.972 "num_base_bdevs_discovered": 3, 00:14:50.972 "num_base_bdevs_operational": 3, 00:14:50.972 "base_bdevs_list": [ 00:14:50.972 { 00:14:50.972 "name": "BaseBdev1", 00:14:50.972 "uuid": "8dc07fc3-bad9-4bf3-a8f7-617da4a367f7", 00:14:50.972 "is_configured": true, 00:14:50.972 "data_offset": 0, 00:14:50.972 "data_size": 65536 00:14:50.972 }, 00:14:50.972 { 00:14:50.972 "name": "BaseBdev2", 00:14:50.972 "uuid": "03cd71b3-b2fb-4de6-b790-155164d8b944", 00:14:50.972 "is_configured": true, 00:14:50.972 "data_offset": 0, 00:14:50.972 "data_size": 65536 00:14:50.972 }, 00:14:50.972 { 00:14:50.972 "name": "BaseBdev3", 00:14:50.972 "uuid": "c7fcb252-74d0-4006-9ea6-88d29620945f", 00:14:50.972 "is_configured": true, 00:14:50.972 "data_offset": 0, 00:14:50.972 "data_size": 65536 00:14:50.972 } 00:14:50.972 ] 00:14:50.972 }' 00:14:50.972 21:11:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:50.972 21:11:13 -- common/autotest_common.sh@10 -- # set +x 00:14:51.907 21:11:14 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:14:51.907 [2024-06-07 21:11:14.409625] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:51.907 [2024-06-07 21:11:14.409832] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:51.907 [2024-06-07 21:11:14.410080] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.907 21:11:14 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:51.907 21:11:14 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:51.907 21:11:14 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:51.907 21:11:14 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:51.907 21:11:14 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:51.907 21:11:14 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:14:51.907 21:11:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:51.907 21:11:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:51.907 21:11:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:51.907 21:11:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:51.907 21:11:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:51.907 21:11:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:51.907 21:11:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:51.907 21:11:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:51.907 21:11:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:51.907 21:11:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.907 21:11:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.166 21:11:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:52.166 "name": "Existed_Raid", 00:14:52.166 "uuid": "9bdec72c-adb7-4458-a78c-a38457b1f24a", 00:14:52.166 "strip_size_kb": 64, 00:14:52.166 "state": "offline", 00:14:52.166 "raid_level": "raid0", 00:14:52.166 "superblock": false, 00:14:52.166 "num_base_bdevs": 3, 00:14:52.166 "num_base_bdevs_discovered": 2, 00:14:52.166 "num_base_bdevs_operational": 2, 00:14:52.166 "base_bdevs_list": [ 00:14:52.166 { 00:14:52.166 "name": null, 00:14:52.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.166 "is_configured": false, 00:14:52.166 "data_offset": 0, 00:14:52.166 "data_size": 65536 00:14:52.166 }, 00:14:52.166 { 00:14:52.166 "name": "BaseBdev2", 00:14:52.166 "uuid": "03cd71b3-b2fb-4de6-b790-155164d8b944", 00:14:52.166 "is_configured": true, 00:14:52.166 "data_offset": 0, 00:14:52.166 "data_size": 65536 00:14:52.166 }, 00:14:52.166 { 00:14:52.166 "name": "BaseBdev3", 00:14:52.166 "uuid": "c7fcb252-74d0-4006-9ea6-88d29620945f", 00:14:52.166 "is_configured": true, 00:14:52.166 "data_offset": 0, 00:14:52.166 "data_size": 65536 00:14:52.166 } 00:14:52.166 ] 00:14:52.166 }' 00:14:52.166 21:11:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:52.166 21:11:14 -- common/autotest_common.sh@10 -- # set +x 00:14:52.732 21:11:15 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:52.732 21:11:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:52.732 21:11:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.732 21:11:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:52.991 21:11:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:52.991 21:11:15 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:14:52.991 21:11:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:53.249 [2024-06-07 21:11:15.782309] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:53.249 21:11:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:53.249 21:11:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:53.249 21:11:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.249 21:11:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:53.507 21:11:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:53.507 21:11:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:53.507 21:11:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:53.765 [2024-06-07 21:11:16.309219] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:53.765 [2024-06-07 21:11:16.309525] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:14:53.765 21:11:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:53.765 21:11:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:53.765 21:11:16 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.765 21:11:16 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:54.024 21:11:16 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:54.024 21:11:16 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:54.024 21:11:16 -- bdev/bdev_raid.sh@287 -- # killprocess 128096 00:14:54.024 21:11:16 -- common/autotest_common.sh@926 -- # '[' -z 128096 ']' 00:14:54.024 21:11:16 -- common/autotest_common.sh@930 -- # kill -0 128096 00:14:54.024 21:11:16 -- common/autotest_common.sh@931 -- # uname 00:14:54.024 21:11:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:54.024 21:11:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128096 00:14:54.024 killing process with pid 128096 00:14:54.024 21:11:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:54.024 21:11:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:54.024 21:11:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128096' 00:14:54.024 21:11:16 -- common/autotest_common.sh@945 -- # kill 128096 00:14:54.024 21:11:16 -- common/autotest_common.sh@950 -- # wait 128096 00:14:54.024 [2024-06-07 21:11:16.602183] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:54.024 [2024-06-07 21:11:16.602329] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:54.283 ************************************ 00:14:54.283 00:14:54.283 real 0m11.517s 00:14:54.283 user 0m21.317s 00:14:54.283 sys 0m1.407s 00:14:54.283 21:11:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:54.283 21:11:16 -- common/autotest_common.sh@10 -- # set +x 00:14:54.283 END TEST raid_state_function_test 00:14:54.283 ************************************ 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:14:54.283 21:11:16 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:54.283 21:11:16 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:14:54.283 21:11:16 -- common/autotest_common.sh@10 -- # set +x 00:14:54.283 ************************************ 00:14:54.283 START TEST raid_state_function_test_sb 00:14:54.283 ************************************ 00:14:54.283 21:11:16 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@226 -- # raid_pid=128491 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128491' 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:54.283 Process raid pid: 128491 00:14:54.283 21:11:16 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128491 /var/tmp/spdk-raid.sock 00:14:54.283 21:11:16 -- common/autotest_common.sh@819 -- # '[' -z 128491 ']' 00:14:54.283 21:11:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:54.283 21:11:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:54.283 21:11:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:54.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:54.283 21:11:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:54.283 21:11:16 -- common/autotest_common.sh@10 -- # set +x 00:14:54.283 [2024-06-07 21:11:16.948626] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
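The only functional difference in this _sb variant is superblock=true, which sets superblock_create_arg to -s in the trace above; that flag is then appended to each bdev_raid_create call so a raid superblock is written to the base bdevs. A minimal sketch of the resulting RPC invocation, matching the command issued in the run that follows:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # -z 64: 64 KiB strip size; -s: write a superblock to the base bdevs;
    # -r raid0: raid level; -b: base bdev names; -n: raid bdev name.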
00:14:54.283 [2024-06-07 21:11:16.948626] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:14:54.283 [2024-06-07 21:11:16.949052] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:54.542 [2024-06-07 21:11:17.106310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:54.542 [2024-06-07 21:11:17.174004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:54.800 [2024-06-07 21:11:17.227984] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:55.367 21:11:17 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:14:55.367 21:11:17 -- common/autotest_common.sh@852 -- # return 0
00:14:55.367 21:11:17 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:14:55.626 [2024-06-07 21:11:18.085917] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:55.626 [2024-06-07 21:11:18.086183] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:55.626 [2024-06-07 21:11:18.086292] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:55.626 [2024-06-07 21:11:18.086349] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:55.626 [2024-06-07 21:11:18.086446] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:55.626 [2024-06-07 21:11:18.086525] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:55.626 21:11:18 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:55.626 21:11:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:55.626 21:11:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:55.626 21:11:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:55.626 21:11:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:55.626 21:11:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:14:55.626 21:11:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:55.626 21:11:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:55.626 21:11:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:55.626 21:11:18 -- bdev/bdev_raid.sh@125 -- # local tmp
00:14:55.626 21:11:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:55.626 21:11:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:55.884 21:11:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:55.884 "name": "Existed_Raid",
00:14:55.884 "uuid": "dcaeac8c-8adc-477b-8cf8-57339d2eb988",
00:14:55.884 "strip_size_kb": 64,
00:14:55.884 "state": "configuring",
00:14:55.884 "raid_level": "raid0",
00:14:55.884 "superblock": true,
00:14:55.884 "num_base_bdevs": 3,
00:14:55.884 "num_base_bdevs_discovered": 0,
00:14:55.884 "num_base_bdevs_operational": 3,
00:14:55.884 "base_bdevs_list": [
00:14:55.884 {
00:14:55.884 "name": "BaseBdev1",
00:14:55.884 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:55.884 "is_configured": false,
00:14:55.884 "data_offset": 0,
00:14:55.884 "data_size": 0
00:14:55.884 },
00:14:55.884 {
00:14:55.884 "name": "BaseBdev2",
00:14:55.884 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:55.884 "is_configured": false,
00:14:55.884 "data_offset": 0,
00:14:55.884 "data_size": 0
00:14:55.884 },
00:14:55.884 {
00:14:55.884 "name": "BaseBdev3",
00:14:55.884 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:55.884 "is_configured": false,
00:14:55.884 "data_offset": 0,
00:14:55.884 "data_size": 0
00:14:55.884 }
00:14:55.884 ]
00:14:55.884 }'
00:14:55.884 21:11:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:55.884 21:11:18 -- common/autotest_common.sh@10 -- # set +x
00:14:56.451 21:11:19 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:56.710 [2024-06-07 21:11:19.306028] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:56.710 [2024-06-07 21:11:19.306217] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:14:56.710 21:11:19 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:14:56.968 [2024-06-07 21:11:19.518157] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:56.968 [2024-06-07 21:11:19.518429] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:56.968 [2024-06-07 21:11:19.518533] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:56.968 [2024-06-07 21:11:19.518715] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:56.968 [2024-06-07 21:11:19.518817] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:56.968 [2024-06-07 21:11:19.518888] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:56.968 21:11:19 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:14:57.228 [2024-06-07 21:11:19.757544] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:57.228 BaseBdev1
00:14:57.228 21:11:19 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:14:57.228 21:11:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1
00:14:57.228 21:11:19 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:14:57.228 21:11:19 -- common/autotest_common.sh@889 -- # local i
00:14:57.228 21:11:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:14:57.228 21:11:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:14:57.228 21:11:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:57.489 21:11:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:57.748 [
00:14:57.748 {
00:14:57.748 "name": "BaseBdev1",
00:14:57.748 "aliases": [
00:14:57.748 "ecbe808f-99e7-468b-8fdd-78e6a1cbe8b4"
00:14:57.748 ],
00:14:57.748 "product_name": "Malloc disk",
00:14:57.748 "block_size": 512,
00:14:57.748 "num_blocks": 65536,
00:14:57.748 "uuid": "ecbe808f-99e7-468b-8fdd-78e6a1cbe8b4",
00:14:57.748 "assigned_rate_limits": {
00:14:57.748 "rw_ios_per_sec": 0,
00:14:57.748 "rw_mbytes_per_sec": 0,
00:14:57.748 "r_mbytes_per_sec": 0,
00:14:57.748 "w_mbytes_per_sec": 0
00:14:57.748 },
00:14:57.748 "claimed": true,
00:14:57.748 "claim_type": "exclusive_write",
00:14:57.748 "zoned": false,
00:14:57.748 "supported_io_types": {
00:14:57.748 "read": true,
00:14:57.748 "write": true,
00:14:57.748 "unmap": true,
00:14:57.748 "write_zeroes": true,
00:14:57.748 "flush": true,
00:14:57.748 "reset": true,
00:14:57.748 "compare": false,
00:14:57.748 "compare_and_write": false,
00:14:57.748 "abort": true,
00:14:57.748 "nvme_admin": false,
00:14:57.748 "nvme_io": false
00:14:57.748 },
00:14:57.748 "memory_domains": [
00:14:57.748 {
00:14:57.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:57.748 "dma_device_type": 2
00:14:57.748 }
00:14:57.748 ],
00:14:57.748 "driver_specific": {}
00:14:57.748 }
00:14:57.748 ]
00:14:57.748 21:11:20 -- common/autotest_common.sh@895 -- # return 0
00:14:57.748 21:11:20 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:57.748 21:11:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:57.748 21:11:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:57.748 21:11:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:57.748 21:11:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:57.748 21:11:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:14:57.748 21:11:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:57.748 21:11:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:57.748 21:11:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:57.748 21:11:20 -- bdev/bdev_raid.sh@125 -- # local tmp
00:14:57.748 21:11:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:57.748 21:11:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:58.007 21:11:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:58.007 "name": "Existed_Raid",
00:14:58.007 "uuid": "7e73cb90-085e-4493-85a2-b076c907f3cf",
00:14:58.007 "strip_size_kb": 64,
00:14:58.007 "state": "configuring",
00:14:58.007 "raid_level": "raid0",
00:14:58.007 "superblock": true,
00:14:58.007 "num_base_bdevs": 3,
00:14:58.007 "num_base_bdevs_discovered": 1,
00:14:58.007 "num_base_bdevs_operational": 3,
00:14:58.007 "base_bdevs_list": [
00:14:58.007 {
00:14:58.007 "name": "BaseBdev1",
00:14:58.007 "uuid": "ecbe808f-99e7-468b-8fdd-78e6a1cbe8b4",
00:14:58.007 "is_configured": true,
00:14:58.007 "data_offset": 2048,
00:14:58.007 "data_size": 63488
00:14:58.007 },
00:14:58.007 {
00:14:58.007 "name": "BaseBdev2",
00:14:58.007 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:58.007 "is_configured": false,
00:14:58.007 "data_offset": 0,
00:14:58.007 "data_size": 0
00:14:58.007 },
00:14:58.007 {
00:14:58.007 "name": "BaseBdev3",
00:14:58.007 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:58.007 "is_configured": false,
00:14:58.007 "data_offset": 0,
00:14:58.007 "data_size": 0
00:14:58.007 }
00:14:58.007 ]
00:14:58.007 }'
00:14:58.007 21:11:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:58.007 21:11:20 -- common/autotest_common.sh@10 -- # set +x
00:14:58.574 21:11:21 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:58.832 [2024-06-07 21:11:21.474018] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:58.832 [2024-06-07 21:11:21.474274] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring
00:14:58.832 21:11:21 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:14:58.832 21:11:21 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:14:59.091 21:11:21 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:14:59.350 BaseBdev1
00:14:59.350 21:11:21 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:14:59.350 21:11:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1
00:14:59.350 21:11:21 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:14:59.350 21:11:21 -- common/autotest_common.sh@889 -- # local i
00:14:59.350 21:11:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:14:59.350 21:11:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:14:59.350 21:11:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:59.608 21:11:22 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:59.867 [
00:14:59.867 {
00:14:59.867 "name": "BaseBdev1",
00:14:59.867 "aliases": [
00:14:59.867 "65f52609-eaa8-4ebd-9f4f-d214ef6e2c84"
00:14:59.867 ],
00:14:59.867 "product_name": "Malloc disk",
00:14:59.867 "block_size": 512,
00:14:59.867 "num_blocks": 65536,
00:14:59.867 "uuid": "65f52609-eaa8-4ebd-9f4f-d214ef6e2c84",
00:14:59.867 "assigned_rate_limits": {
00:14:59.867 "rw_ios_per_sec": 0,
00:14:59.867 "rw_mbytes_per_sec": 0,
00:14:59.867 "r_mbytes_per_sec": 0,
00:14:59.867 "w_mbytes_per_sec": 0
00:14:59.867 },
00:14:59.867 "claimed": false,
00:14:59.867 "zoned": false,
00:14:59.867 "supported_io_types": {
00:14:59.867 "read": true,
00:14:59.867 "write": true,
00:14:59.867 "unmap": true,
00:14:59.867 "write_zeroes": true,
00:14:59.867 "flush": true,
00:14:59.867 "reset": true,
00:14:59.867 "compare": false,
00:14:59.867 "compare_and_write": false,
00:14:59.867 "abort": true,
00:14:59.867 "nvme_admin": false,
00:14:59.867 "nvme_io": false
00:14:59.867 },
00:14:59.867 "memory_domains": [
00:14:59.867 {
00:14:59.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:59.867 "dma_device_type": 2
00:14:59.867 }
00:14:59.867 ],
00:14:59.867 "driver_specific": {}
00:14:59.867 }
00:14:59.867 ]
00:15:00.126 21:11:22 -- common/autotest_common.sh@895 -- # return 0
00:15:00.126 21:11:22 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:00.126 [2024-06-07 21:11:22.562879] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:00.126 [2024-06-07 21:11:22.565028] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:00.126 [2024-06-07 21:11:22.565228] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:00.126 [2024-06-07 21:11:22.565326] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:00.126 [2024-06-07 21:11:22.565405] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:00.126 21:11:22 -- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:15:00.126 21:11:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
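The waitforbdev helper traced above appears to come down to bdev_wait_for_examine plus a bdev_get_bdevs lookup with a per-call timeout; a sketch under that assumption:

    # Let examine callbacks finish, then wait up to 2000 ms (-t) for the
    # named bdev to register, as the traced RPCs do.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_wait_for_examine
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_bdevs -b BaseBdev1 -t 2000 >/dev/null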
00:15:00.126 21:11:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:15:00.126 21:11:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:00.126 21:11:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:00.126 21:11:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:00.126 21:11:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:00.126 21:11:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:00.126 21:11:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:00.126 21:11:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:00.126 21:11:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:00.126 21:11:22 -- bdev/bdev_raid.sh@125 -- # local tmp
00:15:00.126 21:11:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:00.126 21:11:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:00.126 21:11:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:00.126 "name": "Existed_Raid",
00:15:00.126 "uuid": "2591c48b-3ad1-482e-892b-a6e532ddcd94",
00:15:00.126 "strip_size_kb": 64,
00:15:00.126 "state": "configuring",
00:15:00.126 "raid_level": "raid0",
00:15:00.126 "superblock": true,
00:15:00.126 "num_base_bdevs": 3,
00:15:00.126 "num_base_bdevs_discovered": 1,
00:15:00.126 "num_base_bdevs_operational": 3,
00:15:00.126 "base_bdevs_list": [
00:15:00.126 {
00:15:00.126 "name": "BaseBdev1",
00:15:00.126 "uuid": "65f52609-eaa8-4ebd-9f4f-d214ef6e2c84",
00:15:00.126 "is_configured": true,
00:15:00.126 "data_offset": 2048,
00:15:00.126 "data_size": 63488
00:15:00.126 },
00:15:00.126 {
00:15:00.126 "name": "BaseBdev2",
00:15:00.126 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:00.126 "is_configured": false,
00:15:00.126 "data_offset": 0,
00:15:00.126 "data_size": 0
00:15:00.126 },
00:15:00.126 {
00:15:00.126 "name": "BaseBdev3",
00:15:00.126 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:00.126 "is_configured": false,
00:15:00.126 "data_offset": 0,
00:15:00.126 "data_size": 0
00:15:00.126 }
00:15:00.126 ]
00:15:00.126 }'
00:15:00.126 21:11:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:00.126 21:11:22 -- common/autotest_common.sh@10 -- # set +x
00:15:01.061 21:11:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:15:01.061 [2024-06-07 21:11:23.688757] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:01.061 BaseBdev2
00:15:01.061 21:11:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:15:01.061 21:11:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2
00:15:01.061 21:11:23 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:15:01.061 21:11:23 -- common/autotest_common.sh@889 -- # local i
00:15:01.061 21:11:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:15:01.061 21:11:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:15:01.061 21:11:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:01.320 21:11:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:01.579 [
00:15:01.579 {
00:15:01.579 "name": "BaseBdev2",
00:15:01.579 "aliases": [
00:15:01.579 "1225e498-5587-4543-a8d1-1795fb1634d5"
00:15:01.579 ],
00:15:01.579 "product_name": "Malloc disk",
00:15:01.579 "block_size": 512,
00:15:01.579 "num_blocks": 65536,
00:15:01.579 "uuid": "1225e498-5587-4543-a8d1-1795fb1634d5",
00:15:01.579 "assigned_rate_limits": {
00:15:01.579 "rw_ios_per_sec": 0,
00:15:01.579 "rw_mbytes_per_sec": 0,
00:15:01.579 "r_mbytes_per_sec": 0,
00:15:01.579 "w_mbytes_per_sec": 0
00:15:01.579 },
00:15:01.579 "claimed": true,
00:15:01.579 "claim_type": "exclusive_write",
00:15:01.579 "zoned": false,
00:15:01.579 "supported_io_types": {
00:15:01.579 "read": true,
00:15:01.579 "write": true,
00:15:01.579 "unmap": true,
00:15:01.579 "write_zeroes": true,
00:15:01.579 "flush": true,
00:15:01.579 "reset": true,
00:15:01.579 "compare": false,
00:15:01.579 "compare_and_write": false,
00:15:01.579 "abort": true,
00:15:01.579 "nvme_admin": false,
00:15:01.579 "nvme_io": false
00:15:01.579 },
00:15:01.579 "memory_domains": [
00:15:01.579 {
00:15:01.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:01.579 "dma_device_type": 2
00:15:01.579 }
00:15:01.579 ],
00:15:01.579 "driver_specific": {}
00:15:01.579 }
00:15:01.579 ]
00:15:01.579 21:11:24 -- common/autotest_common.sh@895 -- # return 0
00:15:01.579 21:11:24 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:15:01.579 21:11:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:01.579 21:11:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:15:01.579 21:11:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:01.579 21:11:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:01.579 21:11:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:01.579 21:11:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:01.579 21:11:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:01.579 21:11:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:01.579 21:11:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:01.579 21:11:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:01.579 21:11:24 -- bdev/bdev_raid.sh@125 -- # local tmp
00:15:01.579 21:11:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:01.579 21:11:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:01.838 21:11:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:01.838 "name": "Existed_Raid",
00:15:01.838 "uuid": "2591c48b-3ad1-482e-892b-a6e532ddcd94",
00:15:01.838 "strip_size_kb": 64,
00:15:01.838 "state": "configuring",
00:15:01.838 "raid_level": "raid0",
00:15:01.838 "superblock": true,
00:15:01.838 "num_base_bdevs": 3,
00:15:01.838 "num_base_bdevs_discovered": 2,
00:15:01.838 "num_base_bdevs_operational": 3,
00:15:01.838 "base_bdevs_list": [
00:15:01.838 {
00:15:01.838 "name": "BaseBdev1",
00:15:01.838 "uuid": "65f52609-eaa8-4ebd-9f4f-d214ef6e2c84",
00:15:01.838 "is_configured": true,
00:15:01.838 "data_offset": 2048,
00:15:01.838 "data_size": 63488
00:15:01.838 },
00:15:01.838 {
00:15:01.838 "name": "BaseBdev2",
00:15:01.838 "uuid": "1225e498-5587-4543-a8d1-1795fb1634d5",
00:15:01.838 "is_configured": true,
00:15:01.838 "data_offset": 2048,
00:15:01.838 "data_size": 63488
00:15:01.838 },
00:15:01.838 {
00:15:01.838 "name": "BaseBdev3",
00:15:01.838 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:01.838 "is_configured": false,
00:15:01.838 "data_offset": 0,
00:15:01.838 "data_size": 0
00:15:01.838 }
00:15:01.838 ]
00:15:01.838 }'
00:15:01.838 21:11:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:01.838 21:11:24 -- common/autotest_common.sh@10 -- # set +x
00:15:02.774 21:11:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:15:02.774 [2024-06-07 21:11:25.354193] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:02.774 [2024-06-07 21:11:25.354702] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880
00:15:02.774 [2024-06-07 21:11:25.354844] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:15:02.774 BaseBdev3
00:15:02.774 [2024-06-07 21:11:25.355076] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0
00:15:02.774 [2024-06-07 21:11:25.355585] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880
00:15:02.774 [2024-06-07 21:11:25.355703] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880
00:15:02.774 [2024-06-07 21:11:25.355958] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:02.774 21:11:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:15:02.774 21:11:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3
00:15:02.774 21:11:25 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:15:02.774 21:11:25 -- common/autotest_common.sh@889 -- # local i
00:15:02.774 21:11:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:15:02.774 21:11:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:15:02.774 21:11:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:03.032 21:11:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:15:03.291 [
00:15:03.291 {
00:15:03.291 "name": "BaseBdev3",
00:15:03.291 "aliases": [
00:15:03.291 "2ee33f17-1cdb-40d6-893a-fbe1652c9780"
00:15:03.291 ],
00:15:03.291 "product_name": "Malloc disk",
00:15:03.291 "block_size": 512,
00:15:03.291 "num_blocks": 65536,
00:15:03.291 "uuid": "2ee33f17-1cdb-40d6-893a-fbe1652c9780",
00:15:03.291 "assigned_rate_limits": {
00:15:03.291 "rw_ios_per_sec": 0,
00:15:03.291 "rw_mbytes_per_sec": 0,
00:15:03.291 "r_mbytes_per_sec": 0,
00:15:03.291 "w_mbytes_per_sec": 0
00:15:03.291 },
00:15:03.291 "claimed": true,
00:15:03.291 "claim_type": "exclusive_write",
00:15:03.291 "zoned": false,
00:15:03.291 "supported_io_types": {
00:15:03.291 "read": true,
00:15:03.291 "write": true,
00:15:03.291 "unmap": true,
00:15:03.291 "write_zeroes": true,
00:15:03.291 "flush": true,
00:15:03.291 "reset": true,
00:15:03.291 "compare": false,
00:15:03.291 "compare_and_write": false,
00:15:03.291 "abort": true,
00:15:03.291 "nvme_admin": false,
00:15:03.291 "nvme_io": false
00:15:03.291 },
00:15:03.291 "memory_domains": [
00:15:03.291 {
00:15:03.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:03.291 "dma_device_type": 2
00:15:03.291 }
00:15:03.291 ],
00:15:03.291 "driver_specific": {}
00:15:03.291 }
00:15:03.291 ]
00:15:03.291 21:11:25 -- common/autotest_common.sh@895 -- # return 0
00:15:03.292 21:11:25 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:15:03.292 21:11:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
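At this point all three base bdevs exist and the array flips from configuring to online. The same assembly condensed into one sketch (the trace creates the raid first and lets each malloc bdev be claimed as it appears; issuing the creates first, as here, works too):

    # Three 32 MiB members (65536 blocks x 512 B), 64 KiB strip, with an
    # on-disk superblock (-s), under the name used throughout the test.
    for i in 1 2 3; do
        "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_malloc_create 32 512 -b "BaseBdev$i"
    done
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid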
00:15:03.292 21:11:25 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:15:03.292 21:11:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:03.292 21:11:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:03.292 21:11:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:03.292 21:11:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:03.292 21:11:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:03.292 21:11:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:03.292 21:11:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:03.292 21:11:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:03.292 21:11:25 -- bdev/bdev_raid.sh@125 -- # local tmp
00:15:03.292 21:11:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:03.292 21:11:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:03.550 21:11:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:03.550 "name": "Existed_Raid",
00:15:03.550 "uuid": "2591c48b-3ad1-482e-892b-a6e532ddcd94",
00:15:03.550 "strip_size_kb": 64,
00:15:03.550 "state": "online",
00:15:03.550 "raid_level": "raid0",
00:15:03.550 "superblock": true,
00:15:03.550 "num_base_bdevs": 3,
00:15:03.550 "num_base_bdevs_discovered": 3,
00:15:03.550 "num_base_bdevs_operational": 3,
00:15:03.550 "base_bdevs_list": [
00:15:03.550 {
00:15:03.550 "name": "BaseBdev1",
00:15:03.550 "uuid": "65f52609-eaa8-4ebd-9f4f-d214ef6e2c84",
00:15:03.550 "is_configured": true,
00:15:03.550 "data_offset": 2048,
00:15:03.550 "data_size": 63488
00:15:03.550 },
00:15:03.550 {
00:15:03.550 "name": "BaseBdev2",
00:15:03.550 "uuid": "1225e498-5587-4543-a8d1-1795fb1634d5",
00:15:03.550 "is_configured": true,
00:15:03.550 "data_offset": 2048,
00:15:03.551 "data_size": 63488
00:15:03.551 },
00:15:03.551 {
00:15:03.551 "name": "BaseBdev3",
00:15:03.551 "uuid": "2ee33f17-1cdb-40d6-893a-fbe1652c9780",
00:15:03.551 "is_configured": true,
00:15:03.551 "data_offset": 2048,
00:15:03.551 "data_size": 63488
00:15:03.551 }
00:15:03.551 ]
00:15:03.551 }'
00:15:03.551 21:11:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:03.551 21:11:26 -- common/autotest_common.sh@10 -- # set +x
00:15:04.118 21:11:26 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:15:04.377 [2024-06-07 21:11:26.998727] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:04.377 [2024-06-07 21:11:26.998934] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:04.377 [2024-06-07 21:11:26.999148] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:04.377 21:11:27 -- bdev/bdev_raid.sh@263 -- # local expected_state
00:15:04.377 21:11:27 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0
00:15:04.377 21:11:27 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:15:04.377 21:11:27 -- bdev/bdev_raid.sh@197 -- # return 1
00:15:04.377 21:11:27 -- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:15:04.377 21:11:27 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:15:04.377 21:11:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:04.377 21:11:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:15:04.377 21:11:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:04.377 21:11:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:04.377 21:11:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:15:04.377 21:11:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:04.377 21:11:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:04.377 21:11:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:04.377 21:11:27 -- bdev/bdev_raid.sh@125 -- # local tmp
00:15:04.377 21:11:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:04.377 21:11:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:04.635 21:11:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:04.635 "name": "Existed_Raid",
00:15:04.635 "uuid": "2591c48b-3ad1-482e-892b-a6e532ddcd94",
00:15:04.635 "strip_size_kb": 64,
00:15:04.635 "state": "offline",
00:15:04.635 "raid_level": "raid0",
00:15:04.635 "superblock": true,
00:15:04.635 "num_base_bdevs": 3,
00:15:04.635 "num_base_bdevs_discovered": 2,
00:15:04.635 "num_base_bdevs_operational": 2,
00:15:04.635 "base_bdevs_list": [
00:15:04.635 {
00:15:04.635 "name": null,
00:15:04.635 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:04.635 "is_configured": false,
00:15:04.635 "data_offset": 2048,
00:15:04.635 "data_size": 63488
00:15:04.635 },
00:15:04.635 {
00:15:04.635 "name": "BaseBdev2",
00:15:04.635 "uuid": "1225e498-5587-4543-a8d1-1795fb1634d5",
00:15:04.635 "is_configured": true,
00:15:04.635 "data_offset": 2048,
00:15:04.635 "data_size": 63488
00:15:04.635 },
00:15:04.635 {
00:15:04.635 "name": "BaseBdev3",
00:15:04.635 "uuid": "2ee33f17-1cdb-40d6-893a-fbe1652c9780",
00:15:04.635 "is_configured": true,
00:15:04.635 "data_offset": 2048,
00:15:04.635 "data_size": 63488
00:15:04.635 }
00:15:04.635 ]
00:15:04.635 }'
00:15:04.635 21:11:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:04.635 21:11:27 -- common/autotest_common.sh@10 -- # set +x
00:15:05.572 21:11:27 -- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:15:05.572 21:11:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:05.572 21:11:27 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:05.572 21:11:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:15:05.830 21:11:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:15:05.830 21:11:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:05.830 21:11:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:15:06.089 [2024-06-07 21:11:28.497023] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:06.089 21:11:28 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:15:06.089 21:11:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:06.089 21:11:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:06.089 21:11:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:15:06.089 21:11:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:15:06.089 21:11:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:06.089 21:11:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:15:06.348 [2024-06-07 21:11:28.932402] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
[2024-06-07 21:11:28.932677] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline
00:15:06.348 21:11:28 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:15:06.348 21:11:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:06.348 21:11:28 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:06.348 21:11:28 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:15:06.607 21:11:29 -- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:15:06.607 21:11:29 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:15:06.607 21:11:29 -- bdev/bdev_raid.sh@287 -- # killprocess 128491
00:15:06.607 21:11:29 -- common/autotest_common.sh@926 -- # '[' -z 128491 ']'
00:15:06.607 21:11:29 -- common/autotest_common.sh@930 -- # kill -0 128491
00:15:06.607 21:11:29 -- common/autotest_common.sh@931 -- # uname
00:15:06.607 21:11:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:15:06.607 21:11:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128491
00:15:06.607 killing process with pid 128491
00:15:06.607 21:11:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:15:06.607 21:11:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:15:06.607 21:11:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128491'
00:15:06.607 21:11:29 -- common/autotest_common.sh@945 -- # kill 128491
00:15:06.607 21:11:29 -- common/autotest_common.sh@950 -- # wait 128491
00:15:06.607 [2024-06-07 21:11:29.230278] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:06.607 [2024-06-07 21:11:29.230415] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:06.866 ************************************
00:15:06.866 END TEST raid_state_function_test_sb
00:15:06.866 ************************************
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@289 -- # return 0
00:15:06.866
00:15:06.866 real 0m12.581s
00:15:06.866 user 0m23.318s
00:15:06.866 sys 0m1.547s
00:15:06.866 21:11:29 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:15:06.866 21:11:29 -- common/autotest_common.sh@10 -- # set +x
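The killprocess teardown just traced is the same one that closed the previous test; stripped of its xtrace noise it is only three process-control steps:

    # Confirm the daemon still runs, signal it, then reap the exit status.
    kill -0 "$raid_pid"     # non-zero here would mean it already died
    kill "$raid_pid"
    wait "$raid_pid"        # after this the RPC socket is gone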
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3
00:15:06.866 21:11:29 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:15:06.866 21:11:29 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:15:06.866 21:11:29 -- common/autotest_common.sh@10 -- # set +x
00:15:06.866 ************************************
00:15:06.866 START TEST raid_superblock_test
00:15:06.866 ************************************
00:15:06.866 21:11:29 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@344 -- # local strip_size
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']'
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@350 -- # strip_size=64
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@357 -- # raid_pid=128897
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:15:06.866 21:11:29 -- bdev/bdev_raid.sh@358 -- # waitforlisten 128897 /var/tmp/spdk-raid.sock
00:15:06.866 21:11:29 -- common/autotest_common.sh@819 -- # '[' -z 128897 ']'
00:15:06.866 21:11:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:15:06.866 21:11:29 -- common/autotest_common.sh@824 -- # local max_retries=100
00:15:06.866 21:11:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:15:07.125 21:11:29 -- common/autotest_common.sh@828 -- # xtrace_disable
00:15:07.125 21:11:29 -- common/autotest_common.sh@10 -- # set +x
00:15:07.125 [2024-06-07 21:11:29.580083] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:15:07.125 [2024-06-07 21:11:29.580583] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128897 ]
00:15:07.125 [2024-06-07 21:11:29.739982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:07.383 [2024-06-07 21:11:29.801653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:07.383 [2024-06-07 21:11:29.854283] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:07.951 21:11:30 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:15:07.951 21:11:30 -- common/autotest_common.sh@852 -- # return 0
00:15:07.951 21:11:30 -- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:15:07.951 21:11:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:15:07.951 21:11:30 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:15:07.951 21:11:30 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:15:07.951 21:11:30 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:15:07.951 21:11:30 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:07.951 21:11:30 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:15:07.951 21:11:30 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:07.951 21:11:30 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:15:08.209 malloc1
00:15:08.210 21:11:30 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:08.468 [2024-06-07 21:11:30.971211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:08.468 [2024-06-07 21:11:30.971494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:08.468 [2024-06-07 21:11:30.971649] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80
00:15:08.468 [2024-06-07 21:11:30.971792] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:08.468 [2024-06-07 21:11:30.974220] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:08.468 [2024-06-07 21:11:30.974390] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:08.468 pt1
00:15:08.468 21:11:30 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:15:08.468 21:11:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:15:08.468 21:11:30 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:15:08.468 21:11:30 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:15:08.468 21:11:30 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:15:08.468 21:11:30 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:08.468 21:11:30 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:15:08.468 21:11:30 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:08.468 21:11:30 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:15:08.726 malloc2
00:15:08.726 21:11:31 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:08.985 [2024-06-07 21:11:31.437743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:08.985 [2024-06-07 21:11:31.438074] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:08.985 [2024-06-07 21:11:31.438158] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:15:08.985 [2024-06-07 21:11:31.438444] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:08.985 [2024-06-07 21:11:31.440817] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:08.985 [2024-06-07 21:11:31.441053] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:08.985 pt2
00:15:08.985 21:11:31 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:15:08.985 21:11:31 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:15:08.985 21:11:31 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:15:08.985 21:11:31 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:15:08.985 21:11:31 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:15:08.985 21:11:31 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:08.985 21:11:31 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:15:08.985 21:11:31 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:08.985 21:11:31 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:15:09.244 malloc3
00:15:09.244 21:11:31 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:09.244 [2024-06-07 21:11:31.885093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:09.244 [2024-06-07 21:11:31.885389] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:09.244 [2024-06-07 21:11:31.885549] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:15:09.244 [2024-06-07 21:11:31.885698] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:09.244 [2024-06-07 21:11:31.888012] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:09.244 [2024-06-07 21:11:31.888197] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:15:09.244 pt3
00:15:09.244 21:11:31 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:15:09.244 21:11:31 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:15:09.244 21:11:31 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
00:15:09.530 [2024-06-07 21:11:32.097202] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:09.530 [2024-06-07 21:11:32.099282] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:09.530 [2024-06-07 21:11:32.099498] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:09.530 [2024-06-07 21:11:32.099735] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80
00:15:09.530 [2024-06-07 21:11:32.099844] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:15:09.530 [2024-06-07 21:11:32.100020] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860
00:15:09.530 [2024-06-07 21:11:32.100434] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80
00:15:09.530 [2024-06-07 21:11:32.100542] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80
00:15:09.530 [2024-06-07 21:11:32.100775] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:09.530 21:11:32 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:15:09.530 21:11:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:09.530 21:11:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:09.530 21:11:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:09.530 21:11:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:09.530 21:11:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:09.530 21:11:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:09.530 21:11:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:09.530 21:11:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:09.530 21:11:32 -- bdev/bdev_raid.sh@125 -- # local tmp
00:15:09.530 21:11:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:09.530 21:11:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:09.797 21:11:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:09.797 "name": "raid_bdev1",
00:15:09.797 "uuid": "78adf202-e727-4422-a4bb-ed485b337472",
00:15:09.797 "strip_size_kb": 64,
00:15:09.797 "state": "online",
00:15:09.797 "raid_level": "raid0",
00:15:09.797 "superblock": true,
00:15:09.797 "num_base_bdevs": 3,
00:15:09.797 "num_base_bdevs_discovered": 3,
00:15:09.797 "num_base_bdevs_operational": 3,
00:15:09.797 "base_bdevs_list": [
00:15:09.797 {
00:15:09.797 "name": "pt1",
00:15:09.797 "uuid": "ceff5d9e-9557-5c0e-9909-40579b6c3340",
00:15:09.797 "is_configured": true,
00:15:09.797 "data_offset": 2048,
00:15:09.797 "data_size": 63488
00:15:09.797 },
00:15:09.797 {
00:15:09.797 "name": "pt2",
00:15:09.797 "uuid": "1caad8d7-8591-5ebf-95f1-81fb0da89513",
00:15:09.797 "is_configured": true,
00:15:09.797 "data_offset": 2048,
00:15:09.797 "data_size": 63488
00:15:09.797 },
00:15:09.797 {
00:15:09.797 "name": "pt3",
00:15:09.797 "uuid": "42f5378e-59ff-5f0e-b399-5ae1985e6385",
00:15:09.797 "is_configured": true,
00:15:09.797 "data_offset": 2048,
00:15:09.797 "data_size": 63488
00:15:09.797 }
00:15:09.797 ]
00:15:09.797 }'
00:15:09.797 21:11:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:09.797 21:11:32 -- common/autotest_common.sh@10 -- # set +x
00:15:10.364 21:11:32 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:15:10.364 21:11:32 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:15:10.623 [2024-06-07 21:11:33.241692] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:10.623 21:11:33 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=78adf202-e727-4422-a4bb-ed485b337472
00:15:10.623 21:11:33 -- bdev/bdev_raid.sh@380 -- # '[' -z 78adf202-e727-4422-a4bb-ed485b337472 ']'
00:15:10.623 21:11:33 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:15:10.881 [2024-06-07 21:11:33.481505] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:10.881 [2024-06-07 21:11:33.481680] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:10.882 [2024-06-07 21:11:33.481877] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:10.882 [2024-06-07 21:11:33.482104] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:10.882 [2024-06-07 21:11:33.482218] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline
00:15:10.882 21:11:33 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:10.882 21:11:33 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:15:11.140 21:11:33 -- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:15:11.140 21:11:33 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:15:11.140 21:11:33 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:15:11.140 21:11:33 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:15:11.399 21:11:33 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:15:11.399 21:11:33 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:15:11.657 21:11:34 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:15:11.657 21:11:34 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:15:11.916 21:11:34 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:15:11.916 21:11:34 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:15:12.175 21:11:34 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:15:12.175 21:11:34 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:15:12.175 21:11:34 -- common/autotest_common.sh@640 -- # local es=0
00:15:12.175 21:11:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:15:12.175 21:11:34 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:12.175 21:11:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:15:12.175 21:11:34 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:12.175 21:11:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:15:12.175 21:11:34 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:12.175 21:11:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:15:12.175 21:11:34 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:12.175 21:11:34 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:15:12.175 21:11:34 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:15:12.175 [2024-06-07 21:11:34.813827] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:15:12.175 [2024-06-07 21:11:34.815923] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:15:12.175 [2024-06-07 21:11:34.816125] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:15:12.175 [2024-06-07 21:11:34.816223] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:15:12.175 [2024-06-07 21:11:34.816570] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:15:12.175 [2024-06-07 21:11:34.816742] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:15:12.175 [2024-06-07 21:11:34.816927] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:12.175 [2024-06-07 21:11:34.816973] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring
00:15:12.175 request:
00:15:12.175 {
00:15:12.175 "name": "raid_bdev1",
00:15:12.175 "raid_level": "raid0",
00:15:12.175 "base_bdevs": [
00:15:12.175 "malloc1",
00:15:12.175 "malloc2",
00:15:12.175 "malloc3"
00:15:12.175 ],
00:15:12.175 "superblock": false,
00:15:12.175 "strip_size_kb": 64,
00:15:12.175 "method": "bdev_raid_create",
00:15:12.175 "req_id": 1
00:15:12.175 }
00:15:12.175 Got JSON-RPC error response
00:15:12.175 response:
00:15:12.175 {
00:15:12.175 "code": -17,
00:15:12.175 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:15:12.175 }
00:15:12.175 21:11:34 -- common/autotest_common.sh@643 -- # es=1
00:15:12.175 21:11:34 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:15:12.175 21:11:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:15:12.175 21:11:34 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:15:12.175 21:11:34 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:12.175 21:11:34 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:15:12.434 21:11:35 -- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:15:12.434 21:11:35 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:15:12.434 21:11:35 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:12.694 [2024-06-07 21:11:35.229847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:12.694 [2024-06-07 21:11:35.230088] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:12.694 [2024-06-07 21:11:35.230162] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:15:12.694 [2024-06-07 21:11:35.230317] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:12.694 [2024-06-07 21:11:35.232613] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:12.694 [2024-06-07 21:11:35.232797] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:12.694 [2024-06-07 21:11:35.233078] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:15:12.694 [2024-06-07 21:11:35.233221] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:12.694 pt1
00:15:12.694 21:11:35 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:15:12.694 21:11:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:12.694 21:11:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:12.694 21:11:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:12.694 21:11:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:12.694 21:11:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:12.694 21:11:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:12.694 21:11:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:12.694 21:11:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:12.694 21:11:35 -- bdev/bdev_raid.sh@125 -- # local tmp
00:15:12.694 21:11:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:12.694 21:11:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:12.953 21:11:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:12.953 "name": "raid_bdev1",
00:15:12.953 "uuid": "78adf202-e727-4422-a4bb-ed485b337472",
00:15:12.953 "strip_size_kb": 64,
00:15:12.953 "state": "configuring",
00:15:12.953 "raid_level": "raid0",
00:15:12.953 "superblock": true,
00:15:12.953 "num_base_bdevs": 3,
00:15:12.953 "num_base_bdevs_discovered": 1,
00:15:12.953 "num_base_bdevs_operational": 3,
00:15:12.953 "base_bdevs_list": [
00:15:12.953 {
00:15:12.953 "name": "pt1",
00:15:12.953 "uuid": "ceff5d9e-9557-5c0e-9909-40579b6c3340",
00:15:12.953 "is_configured": true,
00:15:12.953 "data_offset": 2048,
00:15:12.953 "data_size": 63488
00:15:12.953 },
00:15:12.953 {
00:15:12.953 "name": null,
00:15:12.953 "uuid": "1caad8d7-8591-5ebf-95f1-81fb0da89513",
00:15:12.953 "is_configured": false,
00:15:12.953 "data_offset": 2048,
00:15:12.953 "data_size": 63488
00:15:12.953 },
00:15:12.953 {
00:15:12.953 "name": null,
00:15:12.953 "uuid": "42f5378e-59ff-5f0e-b399-5ae1985e6385",
00:15:12.953 "is_configured": false,
00:15:12.953 "data_offset": 2048,
00:15:12.953 "data_size": 63488
00:15:12.953 }
00:15:12.953 ]
00:15:12.953 }'
00:15:12.953 21:11:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:12.953 21:11:35 -- common/autotest_common.sh@10 -- # set +x
00:15:13.520 21:11:36 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']'
00:15:13.520 21:11:36 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:13.779 [2024-06-07 21:11:36.394182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:13.779 [2024-06-07 21:11:36.394487] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:13.779 [2024-06-07 21:11:36.394658] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:15:13.779 [2024-06-07 21:11:36.394823] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:13.779 [2024-06-07 21:11:36.395423] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:13.779 [2024-06-07 21:11:36.395573] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:13.779 [2024-06-07 21:11:36.395782] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:15:13.779 [2024-06-07 21:11:36.395912] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:13.779 pt2
00:15:13.779 21:11:36 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:15:14.037 [2024-06-07 21:11:36.598236] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:15:14.037 21:11:36 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:15:14.037 21:11:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:14.037 21:11:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:14.037 21:11:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:14.037 21:11:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:14.037 21:11:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:14.037 21:11:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:14.037 21:11:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:14.037 21:11:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:14.037 21:11:36 -- bdev/bdev_raid.sh@125 -- # local tmp
00:15:14.037 21:11:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:14.037 21:11:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:14.295 21:11:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:14.295 "name": "raid_bdev1",
00:15:14.295 "uuid": "78adf202-e727-4422-a4bb-ed485b337472",
00:15:14.295 "strip_size_kb": 64,
00:15:14.295 "state": "configuring",
00:15:14.295 "raid_level": "raid0",
00:15:14.295 "superblock": true,
00:15:14.295 "num_base_bdevs": 3,
00:15:14.295 "num_base_bdevs_discovered": 1,
00:15:14.295 "num_base_bdevs_operational": 3,
00:15:14.295 "base_bdevs_list": [
00:15:14.295 {
00:15:14.295 "name": "pt1",
00:15:14.295 "uuid": "ceff5d9e-9557-5c0e-9909-40579b6c3340",
00:15:14.295 "is_configured": true,
00:15:14.295 "data_offset": 2048,
00:15:14.295 "data_size": 63488
00:15:14.295 },
00:15:14.295 {
00:15:14.295 "name": null,
00:15:14.295 "uuid": "1caad8d7-8591-5ebf-95f1-81fb0da89513",
00:15:14.295 "is_configured": false,
00:15:14.295 "data_offset": 2048,
00:15:14.296 "data_size": 63488
00:15:14.296 },
00:15:14.296 {
00:15:14.296 "name": null,
00:15:14.296 "uuid": "42f5378e-59ff-5f0e-b399-5ae1985e6385",
00:15:14.296 "is_configured": false,
00:15:14.296 "data_offset": 2048,
00:15:14.296 "data_size": 63488
00:15:14.296 }
00:15:14.296 ]
00:15:14.296 }'
00:15:14.296 21:11:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:14.296 21:11:36 -- common/autotest_common.sh@10 -- # set +x
00:15:14.861 21:11:37 -- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:15:14.861 21:11:37 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:15:14.861 21:11:37 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:15.166 [2024-06-07 21:11:37.706414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:15.166 [2024-06-07 21:11:37.706671] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:15.166 [2024-06-07 21:11:37.706759] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:15:15.166 [2024-06-07 21:11:37.707025] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:15.166 [2024-06-07 21:11:37.707577] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:15.166 [2024-06-07 21:11:37.707725] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:15.166 [2024-06-07 21:11:37.707939] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:15:15.166 [2024-06-07 21:11:37.708070] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:15.166 pt2
00:15:15.166 21:11:37 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:15:15.166 21:11:37 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:15:15.166 21:11:37 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:15.425 [2024-06-07 21:11:37.918455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:15.425 [2024-06-07 21:11:37.918751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:15.425 [2024-06-07 21:11:37.918828] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:15:15.425 [2024-06-07 21:11:37.919066] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:15.425 [2024-06-07 21:11:37.919567] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:15.425 [2024-06-07 21:11:37.919748] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:15:15.425 [2024-06-07 21:11:37.919953] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:15:15.425 [2024-06-07 21:11:37.920066] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:15.425 [2024-06-07 21:11:37.920227] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80
00:15:15.425 [2024-06-07 21:11:37.920350] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:15:15.425 [2024-06-07 21:11:37.920490] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:15:15.425 [2024-06-07 21:11:37.920862] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80
00:15:15.425 [2024-06-07 21:11:37.921013] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80
00:15:15.425 [2024-06-07 21:11:37.921219] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:15.425 pt3
00:15:15.425 21:11:37 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:15:15.425 21:11:37 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:15:15.425 21:11:37 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:15:15.425 21:11:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:15.425 21:11:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:15.425 21:11:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:15.425 21:11:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:15.425 21:11:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:15.425 21:11:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:15.425 21:11:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:15.425 21:11:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:15.425 21:11:37 -- bdev/bdev_raid.sh@125 -- # local tmp
00:15:15.425 21:11:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:15.425 21:11:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:15.683 21:11:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:15.683 "name": "raid_bdev1",
00:15:15.683 "uuid": "78adf202-e727-4422-a4bb-ed485b337472",
00:15:15.683 "strip_size_kb": 64,
00:15:15.683 "state": "online",
00:15:15.683 "raid_level": "raid0",
00:15:15.683 "superblock": true,
00:15:15.683 "num_base_bdevs": 3,
00:15:15.683 "num_base_bdevs_discovered": 3,
00:15:15.683 "num_base_bdevs_operational": 3,
00:15:15.683 "base_bdevs_list": [
00:15:15.683 {
00:15:15.683 "name": "pt1",
00:15:15.683 "uuid": "ceff5d9e-9557-5c0e-9909-40579b6c3340",
00:15:15.683 "is_configured": true,
00:15:15.683 "data_offset": 2048,
00:15:15.683 "data_size": 63488
00:15:15.683 },
00:15:15.683 {
00:15:15.683 "name": "pt2",
00:15:15.683 "uuid": "1caad8d7-8591-5ebf-95f1-81fb0da89513",
00:15:15.683 "is_configured": true,
00:15:15.683 "data_offset": 2048,
00:15:15.683 "data_size": 63488
00:15:15.683 },
00:15:15.683 {
00:15:15.683 "name": "pt3",
00:15:15.683 "uuid": "42f5378e-59ff-5f0e-b399-5ae1985e6385",
00:15:15.683 "is_configured": true,
00:15:15.683 "data_offset": 2048,
00:15:15.683 "data_size": 63488
00:15:15.683 }
00:15:15.683 ]
00:15:15.683 }'
00:15:15.683 21:11:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:15.683 21:11:38 -- common/autotest_common.sh@10 -- # set +x
00:15:16.249 21:11:38 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:15:16.249 21:11:38 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:15:16.507 [2024-06-07 21:11:39.127029] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:16.507 21:11:39 -- bdev/bdev_raid.sh@430 -- # '[' 78adf202-e727-4422-a4bb-ed485b337472 '!=' 78adf202-e727-4422-a4bb-ed485b337472 ']'
00:15:16.507 21:11:39 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0
00:15:16.507 21:11:39 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:15:16.507
21:11:39 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:16.507 21:11:39 -- bdev/bdev_raid.sh@511 -- # killprocess 128897 00:15:16.507 21:11:39 -- common/autotest_common.sh@926 -- # '[' -z 128897 ']' 00:15:16.507 21:11:39 -- common/autotest_common.sh@930 -- # kill -0 128897 00:15:16.507 21:11:39 -- common/autotest_common.sh@931 -- # uname 00:15:16.507 21:11:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:16.507 21:11:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128897 00:15:16.507 killing process with pid 128897 00:15:16.507 21:11:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:16.507 21:11:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:16.507 21:11:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128897' 00:15:16.507 21:11:39 -- common/autotest_common.sh@945 -- # kill 128897 00:15:16.507 21:11:39 -- common/autotest_common.sh@950 -- # wait 128897 00:15:16.507 [2024-06-07 21:11:39.164030] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:16.507 [2024-06-07 21:11:39.164154] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.507 [2024-06-07 21:11:39.164218] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.507 [2024-06-07 21:11:39.164341] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:15:16.766 [2024-06-07 21:11:39.196030] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:16.766 21:11:39 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:16.766 00:15:16.766 real 0m9.901s 00:15:16.766 user 0m18.175s 00:15:16.766 sys 0m1.190s 00:15:16.766 21:11:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.766 21:11:39 -- common/autotest_common.sh@10 -- # set +x 00:15:16.766 ************************************ 00:15:16.766 END TEST raid_superblock_test 00:15:16.766 ************************************ 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:15:17.025 21:11:39 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:17.025 21:11:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:17.025 21:11:39 -- common/autotest_common.sh@10 -- # set +x 00:15:17.025 ************************************ 00:15:17.025 START TEST raid_state_function_test 00:15:17.025 ************************************ 00:15:17.025 21:11:39 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@206 -- # echo 
BaseBdev2 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@226 -- # raid_pid=129212 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:17.025 Process raid pid: 129212 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129212' 00:15:17.025 21:11:39 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129212 /var/tmp/spdk-raid.sock 00:15:17.025 21:11:39 -- common/autotest_common.sh@819 -- # '[' -z 129212 ']' 00:15:17.025 21:11:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:17.025 21:11:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:17.025 21:11:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:17.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:17.025 21:11:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:17.025 21:11:39 -- common/autotest_common.sh@10 -- # set +x 00:15:17.025 [2024-06-07 21:11:39.545670] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
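[Editor's note] The trace above shows raid_state_function_test assembling its parameters: it derives the base bdev names from num_base_bdevs (the @206 loop), picks a strip size because concat takes one (@212-@214), and starts a bdev_svc app on a dedicated RPC socket. A condensed sketch of that setup idiom follows; the paths, socket, and flags are copied from the trace, but the condensation itself is mine, not the script verbatim:

    #!/usr/bin/env bash
    # Condensed sketch of the setup steps traced above. Error handling and
    # the cleanup traps of the real bdev_raid.sh are omitted.
    num_base_bdevs=3
    raid_level=concat
    # Expands to "BaseBdev1 BaseBdev2 BaseBdev3", exactly what the @206 trace shows.
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
    # concat/raid0 take a strip size; raid1 would leave this empty (@212-@214).
    strip_size_create_arg='-z 64'
    rootdir=/home/vagrant/spdk_repo/spdk
    rpc_server=/var/tmp/spdk-raid.sock
    # -L bdev_raid enables the *DEBUG* log flag that produces the bdev_raid.c lines.
    "$rootdir"/test/app/bdev_svc/bdev_svc -r "$rpc_server" -i 0 -L bdev_raid &
    raid_pid=$!
    echo "Process raid pid: $raid_pid"

Every rpc.py call that follows in the log then targets this socket with -s /var/tmp/spdk-raid.sock.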
00:15:17.025 [2024-06-07 21:11:39.546145] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.025 [2024-06-07 21:11:39.693106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.284 [2024-06-07 21:11:39.778451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.284 [2024-06-07 21:11:39.837641] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.851 21:11:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:17.851 21:11:40 -- common/autotest_common.sh@852 -- # return 0 00:15:17.851 21:11:40 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:18.286 [2024-06-07 21:11:40.727180] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:18.286 [2024-06-07 21:11:40.727491] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:18.286 [2024-06-07 21:11:40.727622] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:18.286 [2024-06-07 21:11:40.727680] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:18.286 [2024-06-07 21:11:40.727894] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:18.286 [2024-06-07 21:11:40.727975] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:18.286 21:11:40 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:18.286 21:11:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:18.286 21:11:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:18.286 21:11:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:18.286 21:11:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:18.286 21:11:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:18.286 21:11:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:18.286 21:11:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:18.286 21:11:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:18.286 21:11:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:18.286 21:11:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.286 21:11:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.545 21:11:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:18.545 "name": "Existed_Raid", 00:15:18.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.545 "strip_size_kb": 64, 00:15:18.545 "state": "configuring", 00:15:18.545 "raid_level": "concat", 00:15:18.545 "superblock": false, 00:15:18.545 "num_base_bdevs": 3, 00:15:18.545 "num_base_bdevs_discovered": 0, 00:15:18.545 "num_base_bdevs_operational": 3, 00:15:18.545 "base_bdevs_list": [ 00:15:18.545 { 00:15:18.545 "name": "BaseBdev1", 00:15:18.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.545 "is_configured": false, 00:15:18.545 "data_offset": 0, 00:15:18.545 "data_size": 0 00:15:18.545 }, 00:15:18.545 { 00:15:18.545 "name": "BaseBdev2", 00:15:18.545 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:18.545 "is_configured": false, 00:15:18.545 "data_offset": 0, 00:15:18.545 "data_size": 0 00:15:18.545 }, 00:15:18.545 { 00:15:18.545 "name": "BaseBdev3", 00:15:18.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.545 "is_configured": false, 00:15:18.545 "data_offset": 0, 00:15:18.545 "data_size": 0 00:15:18.545 } 00:15:18.545 ] 00:15:18.545 }' 00:15:18.545 21:11:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:18.545 21:11:40 -- common/autotest_common.sh@10 -- # set +x 00:15:19.110 21:11:41 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:19.370 [2024-06-07 21:11:41.839407] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:19.370 [2024-06-07 21:11:41.839673] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:19.370 21:11:41 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:19.628 [2024-06-07 21:11:42.095452] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.628 [2024-06-07 21:11:42.095700] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.628 [2024-06-07 21:11:42.095808] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.628 [2024-06-07 21:11:42.095864] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.628 [2024-06-07 21:11:42.095950] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:19.628 [2024-06-07 21:11:42.096017] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:19.628 21:11:42 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:19.886 [2024-06-07 21:11:42.310893] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.886 BaseBdev1 00:15:19.886 21:11:42 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:19.886 21:11:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:19.886 21:11:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:19.886 21:11:42 -- common/autotest_common.sh@889 -- # local i 00:15:19.886 21:11:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:19.886 21:11:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:19.886 21:11:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:19.887 21:11:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:20.145 [ 00:15:20.145 { 00:15:20.145 "name": "BaseBdev1", 00:15:20.145 "aliases": [ 00:15:20.145 "a60d5ff8-f986-49b3-9c05-47d7a20abe79" 00:15:20.145 ], 00:15:20.145 "product_name": "Malloc disk", 00:15:20.145 "block_size": 512, 00:15:20.145 "num_blocks": 65536, 00:15:20.145 "uuid": "a60d5ff8-f986-49b3-9c05-47d7a20abe79", 00:15:20.145 "assigned_rate_limits": { 00:15:20.145 "rw_ios_per_sec": 0, 00:15:20.145 "rw_mbytes_per_sec": 0, 00:15:20.145 "r_mbytes_per_sec": 0, 00:15:20.145 "w_mbytes_per_sec": 
0 00:15:20.145 }, 00:15:20.145 "claimed": true, 00:15:20.145 "claim_type": "exclusive_write", 00:15:20.145 "zoned": false, 00:15:20.145 "supported_io_types": { 00:15:20.145 "read": true, 00:15:20.145 "write": true, 00:15:20.146 "unmap": true, 00:15:20.146 "write_zeroes": true, 00:15:20.146 "flush": true, 00:15:20.146 "reset": true, 00:15:20.146 "compare": false, 00:15:20.146 "compare_and_write": false, 00:15:20.146 "abort": true, 00:15:20.146 "nvme_admin": false, 00:15:20.146 "nvme_io": false 00:15:20.146 }, 00:15:20.146 "memory_domains": [ 00:15:20.146 { 00:15:20.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.146 "dma_device_type": 2 00:15:20.146 } 00:15:20.146 ], 00:15:20.146 "driver_specific": {} 00:15:20.146 } 00:15:20.146 ] 00:15:20.146 21:11:42 -- common/autotest_common.sh@895 -- # return 0 00:15:20.146 21:11:42 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:20.146 21:11:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:20.146 21:11:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:20.146 21:11:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:20.146 21:11:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:20.146 21:11:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:20.146 21:11:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:20.146 21:11:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:20.146 21:11:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:20.146 21:11:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:20.146 21:11:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.146 21:11:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.404 21:11:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:20.404 "name": "Existed_Raid", 00:15:20.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.404 "strip_size_kb": 64, 00:15:20.404 "state": "configuring", 00:15:20.404 "raid_level": "concat", 00:15:20.404 "superblock": false, 00:15:20.404 "num_base_bdevs": 3, 00:15:20.404 "num_base_bdevs_discovered": 1, 00:15:20.404 "num_base_bdevs_operational": 3, 00:15:20.404 "base_bdevs_list": [ 00:15:20.404 { 00:15:20.405 "name": "BaseBdev1", 00:15:20.405 "uuid": "a60d5ff8-f986-49b3-9c05-47d7a20abe79", 00:15:20.405 "is_configured": true, 00:15:20.405 "data_offset": 0, 00:15:20.405 "data_size": 65536 00:15:20.405 }, 00:15:20.405 { 00:15:20.405 "name": "BaseBdev2", 00:15:20.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.405 "is_configured": false, 00:15:20.405 "data_offset": 0, 00:15:20.405 "data_size": 0 00:15:20.405 }, 00:15:20.405 { 00:15:20.405 "name": "BaseBdev3", 00:15:20.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.405 "is_configured": false, 00:15:20.405 "data_offset": 0, 00:15:20.405 "data_size": 0 00:15:20.405 } 00:15:20.405 ] 00:15:20.405 }' 00:15:20.405 21:11:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:20.405 21:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:20.970 21:11:43 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:21.227 [2024-06-07 21:11:43.859402] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:21.227 [2024-06-07 21:11:43.859672] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006980 name Existed_Raid, state configuring 00:15:21.227 21:11:43 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:21.227 21:11:43 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:21.486 [2024-06-07 21:11:44.063529] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.486 [2024-06-07 21:11:44.065758] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:21.486 [2024-06-07 21:11:44.065991] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:21.486 [2024-06-07 21:11:44.066120] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:21.486 [2024-06-07 21:11:44.066185] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:21.486 21:11:44 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:21.486 21:11:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:21.486 21:11:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:21.486 21:11:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:21.486 21:11:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:21.486 21:11:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:21.486 21:11:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:21.486 21:11:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:21.486 21:11:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:21.486 21:11:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:21.486 21:11:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:21.486 21:11:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:21.486 21:11:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.486 21:11:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.754 21:11:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:21.754 "name": "Existed_Raid", 00:15:21.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.754 "strip_size_kb": 64, 00:15:21.754 "state": "configuring", 00:15:21.754 "raid_level": "concat", 00:15:21.754 "superblock": false, 00:15:21.754 "num_base_bdevs": 3, 00:15:21.754 "num_base_bdevs_discovered": 1, 00:15:21.754 "num_base_bdevs_operational": 3, 00:15:21.754 "base_bdevs_list": [ 00:15:21.754 { 00:15:21.755 "name": "BaseBdev1", 00:15:21.755 "uuid": "a60d5ff8-f986-49b3-9c05-47d7a20abe79", 00:15:21.755 "is_configured": true, 00:15:21.755 "data_offset": 0, 00:15:21.755 "data_size": 65536 00:15:21.755 }, 00:15:21.755 { 00:15:21.755 "name": "BaseBdev2", 00:15:21.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.755 "is_configured": false, 00:15:21.755 "data_offset": 0, 00:15:21.755 "data_size": 0 00:15:21.755 }, 00:15:21.755 { 00:15:21.755 "name": "BaseBdev3", 00:15:21.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.755 "is_configured": false, 00:15:21.755 "data_offset": 0, 00:15:21.755 "data_size": 0 00:15:21.755 } 00:15:21.755 ] 00:15:21.755 }' 00:15:21.755 21:11:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:21.755 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:15:22.329 21:11:44 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:22.654 [2024-06-07 21:11:45.220724] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:22.654 BaseBdev2 00:15:22.654 21:11:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:22.654 21:11:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:22.654 21:11:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:22.654 21:11:45 -- common/autotest_common.sh@889 -- # local i 00:15:22.654 21:11:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:22.654 21:11:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:22.654 21:11:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:22.912 21:11:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:23.280 [ 00:15:23.280 { 00:15:23.280 "name": "BaseBdev2", 00:15:23.280 "aliases": [ 00:15:23.280 "e9a92388-b872-4732-b316-e59fa1b2a57c" 00:15:23.280 ], 00:15:23.280 "product_name": "Malloc disk", 00:15:23.280 "block_size": 512, 00:15:23.280 "num_blocks": 65536, 00:15:23.280 "uuid": "e9a92388-b872-4732-b316-e59fa1b2a57c", 00:15:23.280 "assigned_rate_limits": { 00:15:23.280 "rw_ios_per_sec": 0, 00:15:23.280 "rw_mbytes_per_sec": 0, 00:15:23.280 "r_mbytes_per_sec": 0, 00:15:23.280 "w_mbytes_per_sec": 0 00:15:23.280 }, 00:15:23.280 "claimed": true, 00:15:23.280 "claim_type": "exclusive_write", 00:15:23.280 "zoned": false, 00:15:23.280 "supported_io_types": { 00:15:23.280 "read": true, 00:15:23.280 "write": true, 00:15:23.280 "unmap": true, 00:15:23.280 "write_zeroes": true, 00:15:23.280 "flush": true, 00:15:23.280 "reset": true, 00:15:23.280 "compare": false, 00:15:23.280 "compare_and_write": false, 00:15:23.280 "abort": true, 00:15:23.280 "nvme_admin": false, 00:15:23.280 "nvme_io": false 00:15:23.280 }, 00:15:23.280 "memory_domains": [ 00:15:23.280 { 00:15:23.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.280 "dma_device_type": 2 00:15:23.280 } 00:15:23.280 ], 00:15:23.280 "driver_specific": {} 00:15:23.280 } 00:15:23.280 ] 00:15:23.280 21:11:45 -- common/autotest_common.sh@895 -- # return 0 00:15:23.280 21:11:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:23.280 21:11:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:23.280 21:11:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:23.280 21:11:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:23.280 21:11:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:23.280 21:11:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:23.280 21:11:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:23.280 21:11:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:23.280 21:11:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:23.280 21:11:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:23.280 21:11:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:23.280 21:11:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:23.280 21:11:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.280 21:11:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
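[Editor's note] The @117-@127 lines above are the preamble of verify_raid_bdev_state: it declares locals for the expected name, state, level, strip size, and operational count, then captures the matching entry from bdev_raid_get_bdevs with jq. The comparison step itself is not visible in this excerpt, so the checks in this sketch are a reconstruction under that assumption, not the function verbatim:

    # Sketch of what verify_raid_bdev_state (bdev_raid.sh@117-@129) does with
    # the JSON it captures above. Only the locals and the two @127 commands
    # appear in the trace; the field checks below are my reconstruction.
    verify_raid_bdev_state_sketch() {
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3 strip_size=$4
        local num_base_bdevs_operational=$5
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        local raid_bdev_info
        # Same capture as the @127 trace: dump all raid bdevs, keep the named one.
        raid_bdev_info=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        # Compare the captured JSON against the expectations passed in.
        [ "$(jq -r .state <<<"$raid_bdev_info")" = "$expected_state" ] || return 1
        [ "$(jq -r .raid_level <<<"$raid_bdev_info")" = "$raid_level" ] || return 1
        [ "$(jq -r .strip_size_kb <<<"$raid_bdev_info")" = "$strip_size" ] || return 1
        [ "$(jq -r .num_base_bdevs_operational <<<"$raid_bdev_info")" = \
          "$num_base_bdevs_operational" ] || return 1
    }

This explains why each verification in the log is followed by a JSON dump: the assertion is made against the raid bdev's reported state ("configuring", "online", "offline"), not against side effects.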
00:15:23.280 21:11:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:23.280 "name": "Existed_Raid", 00:15:23.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.280 "strip_size_kb": 64, 00:15:23.280 "state": "configuring", 00:15:23.280 "raid_level": "concat", 00:15:23.281 "superblock": false, 00:15:23.281 "num_base_bdevs": 3, 00:15:23.281 "num_base_bdevs_discovered": 2, 00:15:23.281 "num_base_bdevs_operational": 3, 00:15:23.281 "base_bdevs_list": [ 00:15:23.281 { 00:15:23.281 "name": "BaseBdev1", 00:15:23.281 "uuid": "a60d5ff8-f986-49b3-9c05-47d7a20abe79", 00:15:23.281 "is_configured": true, 00:15:23.281 "data_offset": 0, 00:15:23.281 "data_size": 65536 00:15:23.281 }, 00:15:23.281 { 00:15:23.281 "name": "BaseBdev2", 00:15:23.281 "uuid": "e9a92388-b872-4732-b316-e59fa1b2a57c", 00:15:23.281 "is_configured": true, 00:15:23.281 "data_offset": 0, 00:15:23.281 "data_size": 65536 00:15:23.281 }, 00:15:23.281 { 00:15:23.281 "name": "BaseBdev3", 00:15:23.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.281 "is_configured": false, 00:15:23.281 "data_offset": 0, 00:15:23.281 "data_size": 0 00:15:23.281 } 00:15:23.281 ] 00:15:23.281 }' 00:15:23.281 21:11:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:23.281 21:11:45 -- common/autotest_common.sh@10 -- # set +x 00:15:24.213 21:11:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:24.213 [2024-06-07 21:11:46.801991] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:24.213 [2024-06-07 21:11:46.802246] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:24.213 [2024-06-07 21:11:46.802288] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:24.213 [2024-06-07 21:11:46.802550] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:24.213 [2024-06-07 21:11:46.803101] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:24.213 [2024-06-07 21:11:46.803280] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:15:24.213 [2024-06-07 21:11:46.803645] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.213 BaseBdev3 00:15:24.213 21:11:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:24.213 21:11:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:24.213 21:11:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:24.213 21:11:46 -- common/autotest_common.sh@889 -- # local i 00:15:24.213 21:11:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:24.213 21:11:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:24.213 21:11:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:24.471 21:11:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:24.729 [ 00:15:24.729 { 00:15:24.729 "name": "BaseBdev3", 00:15:24.729 "aliases": [ 00:15:24.729 "cc47d856-43ea-4f94-be6b-07d05cdc4845" 00:15:24.729 ], 00:15:24.729 "product_name": "Malloc disk", 00:15:24.729 "block_size": 512, 00:15:24.729 "num_blocks": 65536, 00:15:24.729 "uuid": "cc47d856-43ea-4f94-be6b-07d05cdc4845", 00:15:24.729 "assigned_rate_limits": { 00:15:24.729 
"rw_ios_per_sec": 0, 00:15:24.729 "rw_mbytes_per_sec": 0, 00:15:24.729 "r_mbytes_per_sec": 0, 00:15:24.729 "w_mbytes_per_sec": 0 00:15:24.729 }, 00:15:24.729 "claimed": true, 00:15:24.729 "claim_type": "exclusive_write", 00:15:24.729 "zoned": false, 00:15:24.729 "supported_io_types": { 00:15:24.729 "read": true, 00:15:24.729 "write": true, 00:15:24.729 "unmap": true, 00:15:24.729 "write_zeroes": true, 00:15:24.729 "flush": true, 00:15:24.729 "reset": true, 00:15:24.729 "compare": false, 00:15:24.729 "compare_and_write": false, 00:15:24.729 "abort": true, 00:15:24.729 "nvme_admin": false, 00:15:24.729 "nvme_io": false 00:15:24.729 }, 00:15:24.729 "memory_domains": [ 00:15:24.729 { 00:15:24.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.729 "dma_device_type": 2 00:15:24.729 } 00:15:24.729 ], 00:15:24.729 "driver_specific": {} 00:15:24.729 } 00:15:24.729 ] 00:15:24.729 21:11:47 -- common/autotest_common.sh@895 -- # return 0 00:15:24.729 21:11:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:24.729 21:11:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:24.729 21:11:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:24.729 21:11:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:24.729 21:11:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:24.729 21:11:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:24.729 21:11:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:24.729 21:11:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:24.729 21:11:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:24.729 21:11:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:24.729 21:11:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:24.729 21:11:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:24.729 21:11:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.729 21:11:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.987 21:11:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:24.987 "name": "Existed_Raid", 00:15:24.987 "uuid": "b941b4dd-b52a-4ec8-8de4-2aa4a2084881", 00:15:24.987 "strip_size_kb": 64, 00:15:24.987 "state": "online", 00:15:24.987 "raid_level": "concat", 00:15:24.987 "superblock": false, 00:15:24.987 "num_base_bdevs": 3, 00:15:24.987 "num_base_bdevs_discovered": 3, 00:15:24.987 "num_base_bdevs_operational": 3, 00:15:24.987 "base_bdevs_list": [ 00:15:24.987 { 00:15:24.987 "name": "BaseBdev1", 00:15:24.987 "uuid": "a60d5ff8-f986-49b3-9c05-47d7a20abe79", 00:15:24.987 "is_configured": true, 00:15:24.987 "data_offset": 0, 00:15:24.987 "data_size": 65536 00:15:24.987 }, 00:15:24.987 { 00:15:24.987 "name": "BaseBdev2", 00:15:24.987 "uuid": "e9a92388-b872-4732-b316-e59fa1b2a57c", 00:15:24.987 "is_configured": true, 00:15:24.987 "data_offset": 0, 00:15:24.987 "data_size": 65536 00:15:24.987 }, 00:15:24.987 { 00:15:24.987 "name": "BaseBdev3", 00:15:24.987 "uuid": "cc47d856-43ea-4f94-be6b-07d05cdc4845", 00:15:24.987 "is_configured": true, 00:15:24.987 "data_offset": 0, 00:15:24.987 "data_size": 65536 00:15:24.987 } 00:15:24.987 ] 00:15:24.987 }' 00:15:24.987 21:11:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:24.987 21:11:47 -- common/autotest_common.sh@10 -- # set +x 00:15:25.553 21:11:48 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:15:25.812 [2024-06-07 21:11:48.305427] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:25.812 [2024-06-07 21:11:48.305656] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:25.812 [2024-06-07 21:11:48.305859] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.812 21:11:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:25.812 21:11:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:25.812 21:11:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:25.812 21:11:48 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:25.812 21:11:48 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:25.812 21:11:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:25.812 21:11:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:25.812 21:11:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:25.812 21:11:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:25.812 21:11:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:25.812 21:11:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:25.812 21:11:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:25.812 21:11:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:25.812 21:11:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:25.812 21:11:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:25.812 21:11:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.812 21:11:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.070 21:11:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:26.070 "name": "Existed_Raid", 00:15:26.070 "uuid": "b941b4dd-b52a-4ec8-8de4-2aa4a2084881", 00:15:26.070 "strip_size_kb": 64, 00:15:26.070 "state": "offline", 00:15:26.070 "raid_level": "concat", 00:15:26.070 "superblock": false, 00:15:26.070 "num_base_bdevs": 3, 00:15:26.070 "num_base_bdevs_discovered": 2, 00:15:26.070 "num_base_bdevs_operational": 2, 00:15:26.070 "base_bdevs_list": [ 00:15:26.070 { 00:15:26.070 "name": null, 00:15:26.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.070 "is_configured": false, 00:15:26.070 "data_offset": 0, 00:15:26.070 "data_size": 65536 00:15:26.070 }, 00:15:26.070 { 00:15:26.070 "name": "BaseBdev2", 00:15:26.070 "uuid": "e9a92388-b872-4732-b316-e59fa1b2a57c", 00:15:26.070 "is_configured": true, 00:15:26.070 "data_offset": 0, 00:15:26.070 "data_size": 65536 00:15:26.070 }, 00:15:26.070 { 00:15:26.070 "name": "BaseBdev3", 00:15:26.070 "uuid": "cc47d856-43ea-4f94-be6b-07d05cdc4845", 00:15:26.070 "is_configured": true, 00:15:26.070 "data_offset": 0, 00:15:26.070 "data_size": 65536 00:15:26.070 } 00:15:26.070 ] 00:15:26.070 }' 00:15:26.070 21:11:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:26.070 21:11:48 -- common/autotest_common.sh@10 -- # set +x 00:15:26.637 21:11:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:26.637 21:11:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:26.637 21:11:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.637 21:11:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:26.895 21:11:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:26.895 21:11:49 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:26.895 21:11:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:27.153 [2024-06-07 21:11:49.689210] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:27.153 21:11:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:27.153 21:11:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:27.153 21:11:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.153 21:11:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:27.411 21:11:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:27.411 21:11:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:27.411 21:11:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:27.669 [2024-06-07 21:11:50.195680] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:27.669 [2024-06-07 21:11:50.195887] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:15:27.669 21:11:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:27.669 21:11:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:27.669 21:11:50 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.669 21:11:50 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:27.928 21:11:50 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:27.928 21:11:50 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:27.928 21:11:50 -- bdev/bdev_raid.sh@287 -- # killprocess 129212 00:15:27.928 21:11:50 -- common/autotest_common.sh@926 -- # '[' -z 129212 ']' 00:15:27.928 21:11:50 -- common/autotest_common.sh@930 -- # kill -0 129212 00:15:27.928 21:11:50 -- common/autotest_common.sh@931 -- # uname 00:15:27.928 21:11:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:27.928 21:11:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129212 00:15:27.928 killing process with pid 129212 00:15:27.928 21:11:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:27.928 21:11:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:27.928 21:11:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129212' 00:15:27.928 21:11:50 -- common/autotest_common.sh@945 -- # kill 129212 00:15:27.928 21:11:50 -- common/autotest_common.sh@950 -- # wait 129212 00:15:27.928 [2024-06-07 21:11:50.450421] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:27.928 [2024-06-07 21:11:50.450559] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:28.187 00:15:28.187 real 0m11.189s 00:15:28.187 user 0m20.836s 00:15:28.187 sys 0m1.307s 00:15:28.187 21:11:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.187 ************************************ 00:15:28.187 END TEST raid_state_function_test 00:15:28.187 21:11:50 -- common/autotest_common.sh@10 -- # set +x 00:15:28.187 ************************************ 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:15:28.187 21:11:50 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 
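[Editor's note] Above, killprocess tears down the test app (pid 129212): it checks the pid is non-empty, confirms the process is alive with kill -0, reads its comm name via ps, then kills and waits. A reconstruction of that helper as traced (common/autotest_common.sh@926-@950); not guaranteed line-for-line:

    # Mirrors the killprocess steps visible in the trace above.
    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1            # @926: refuse an empty pid
        kill -0 "$pid" || return 1           # @930: is the process still alive?
        if [ "$(uname)" = Linux ]; then      # @931
            process_name=$(ps --no-headers -o comm= "$pid")  # @932
        fi
        # reactor_0 is the SPDK app's main thread name; a comm of "sudo"
        # would mean we are about to kill the wrong process (@936).
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid" # @944
        kill "$pid"                          # @945
        wait "$pid"                          # @950: reap it so the raid_bdev_exit
    }                                        #        debug lines flush before the next test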
00:15:28.187 21:11:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:28.187 21:11:50 -- common/autotest_common.sh@10 -- # set +x 00:15:28.187 ************************************ 00:15:28.187 START TEST raid_state_function_test_sb 00:15:28.187 ************************************ 00:15:28.187 21:11:50 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:28.187 Process raid pid: 129615 00:15:28.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@226 -- # raid_pid=129615 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129615' 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129615 /var/tmp/spdk-raid.sock 00:15:28.187 21:11:50 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:28.187 21:11:50 -- common/autotest_common.sh@819 -- # '[' -z 129615 ']' 00:15:28.187 21:11:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:28.187 21:11:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:28.187 21:11:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
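[Editor's note] After echoing the 'Waiting for process...' message, waitforlisten (bdev_raid.sh@228) blocks until the freshly started bdev_svc (pid 129615) answers on /var/tmp/spdk-raid.sock. The polling loop below is an approximation of that helper; the rpc_get_methods probe, retry count, and sleep interval are my assumptions, not taken from this log:

    # Approximate waitforlisten: poll the RPC socket until the app responds.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk-raid.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries-- > 0)); do
            kill -0 "$pid" || return 1   # give up if the app died during startup
            # A short-timeout RPC doubles as a liveness probe for the socket.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                   rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }

Note the only difference from the previous (non-superblock) run's setup is superblock_create_arg=-s, which makes every subsequent bdev_raid_create in this test write a superblock to its base bdevs.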
00:15:28.187 21:11:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:28.187 21:11:50 -- common/autotest_common.sh@10 -- # set +x 00:15:28.187 [2024-06-07 21:11:50.783090] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:28.187 [2024-06-07 21:11:50.783534] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.446 [2024-06-07 21:11:50.949709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.446 [2024-06-07 21:11:51.019240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.446 [2024-06-07 21:11:51.073971] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.013 21:11:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:29.013 21:11:51 -- common/autotest_common.sh@852 -- # return 0 00:15:29.013 21:11:51 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:29.298 [2024-06-07 21:11:51.874651] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:29.298 [2024-06-07 21:11:51.874966] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:29.298 [2024-06-07 21:11:51.875104] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:29.298 [2024-06-07 21:11:51.875165] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:29.298 [2024-06-07 21:11:51.875379] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:29.298 [2024-06-07 21:11:51.875463] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:29.298 21:11:51 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:29.298 21:11:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:29.298 21:11:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:29.298 21:11:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:29.298 21:11:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:29.298 21:11:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:29.298 21:11:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:29.298 21:11:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:29.298 21:11:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:29.298 21:11:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:29.298 21:11:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.298 21:11:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.556 21:11:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:29.556 "name": "Existed_Raid", 00:15:29.556 "uuid": "10584735-7f30-4951-8df8-6c32de78451b", 00:15:29.556 "strip_size_kb": 64, 00:15:29.556 "state": "configuring", 00:15:29.556 "raid_level": "concat", 00:15:29.556 "superblock": true, 00:15:29.556 "num_base_bdevs": 3, 00:15:29.556 "num_base_bdevs_discovered": 0, 00:15:29.556 "num_base_bdevs_operational": 3, 00:15:29.556 "base_bdevs_list": [ 00:15:29.556 { 00:15:29.556 "name": 
"BaseBdev1", 00:15:29.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.556 "is_configured": false, 00:15:29.556 "data_offset": 0, 00:15:29.556 "data_size": 0 00:15:29.556 }, 00:15:29.556 { 00:15:29.556 "name": "BaseBdev2", 00:15:29.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.556 "is_configured": false, 00:15:29.556 "data_offset": 0, 00:15:29.556 "data_size": 0 00:15:29.556 }, 00:15:29.556 { 00:15:29.556 "name": "BaseBdev3", 00:15:29.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.556 "is_configured": false, 00:15:29.556 "data_offset": 0, 00:15:29.556 "data_size": 0 00:15:29.556 } 00:15:29.556 ] 00:15:29.556 }' 00:15:29.556 21:11:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:29.556 21:11:52 -- common/autotest_common.sh@10 -- # set +x 00:15:30.124 21:11:52 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:30.382 [2024-06-07 21:11:52.914729] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:30.382 [2024-06-07 21:11:52.914994] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:30.382 21:11:52 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:30.640 [2024-06-07 21:11:53.122781] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:30.640 [2024-06-07 21:11:53.122999] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:30.640 [2024-06-07 21:11:53.123109] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:30.640 [2024-06-07 21:11:53.123166] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.640 [2024-06-07 21:11:53.123351] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:30.640 [2024-06-07 21:11:53.123419] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:30.640 21:11:53 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:30.897 [2024-06-07 21:11:53.333855] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.897 BaseBdev1 00:15:30.897 21:11:53 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:30.897 21:11:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:30.897 21:11:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:30.897 21:11:53 -- common/autotest_common.sh@889 -- # local i 00:15:30.897 21:11:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:30.897 21:11:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:30.897 21:11:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:30.897 21:11:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:31.156 [ 00:15:31.156 { 00:15:31.156 "name": "BaseBdev1", 00:15:31.156 "aliases": [ 00:15:31.156 "df31342a-4f5d-4ad5-a334-52c5863416b4" 00:15:31.156 ], 00:15:31.156 "product_name": "Malloc disk", 00:15:31.156 "block_size": 512, 00:15:31.156 
"num_blocks": 65536, 00:15:31.156 "uuid": "df31342a-4f5d-4ad5-a334-52c5863416b4", 00:15:31.156 "assigned_rate_limits": { 00:15:31.156 "rw_ios_per_sec": 0, 00:15:31.156 "rw_mbytes_per_sec": 0, 00:15:31.156 "r_mbytes_per_sec": 0, 00:15:31.156 "w_mbytes_per_sec": 0 00:15:31.156 }, 00:15:31.156 "claimed": true, 00:15:31.156 "claim_type": "exclusive_write", 00:15:31.156 "zoned": false, 00:15:31.156 "supported_io_types": { 00:15:31.156 "read": true, 00:15:31.156 "write": true, 00:15:31.156 "unmap": true, 00:15:31.156 "write_zeroes": true, 00:15:31.156 "flush": true, 00:15:31.156 "reset": true, 00:15:31.156 "compare": false, 00:15:31.156 "compare_and_write": false, 00:15:31.156 "abort": true, 00:15:31.156 "nvme_admin": false, 00:15:31.156 "nvme_io": false 00:15:31.156 }, 00:15:31.156 "memory_domains": [ 00:15:31.156 { 00:15:31.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.156 "dma_device_type": 2 00:15:31.156 } 00:15:31.156 ], 00:15:31.156 "driver_specific": {} 00:15:31.156 } 00:15:31.156 ] 00:15:31.156 21:11:53 -- common/autotest_common.sh@895 -- # return 0 00:15:31.156 21:11:53 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:31.156 21:11:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:31.156 21:11:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:31.156 21:11:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:31.156 21:11:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:31.156 21:11:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:31.156 21:11:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:31.156 21:11:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:31.156 21:11:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:31.156 21:11:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:31.156 21:11:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.156 21:11:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.414 21:11:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:31.414 "name": "Existed_Raid", 00:15:31.414 "uuid": "e62ff57c-95df-41c8-b22b-5603a08604a6", 00:15:31.414 "strip_size_kb": 64, 00:15:31.414 "state": "configuring", 00:15:31.414 "raid_level": "concat", 00:15:31.414 "superblock": true, 00:15:31.414 "num_base_bdevs": 3, 00:15:31.414 "num_base_bdevs_discovered": 1, 00:15:31.414 "num_base_bdevs_operational": 3, 00:15:31.414 "base_bdevs_list": [ 00:15:31.414 { 00:15:31.414 "name": "BaseBdev1", 00:15:31.414 "uuid": "df31342a-4f5d-4ad5-a334-52c5863416b4", 00:15:31.414 "is_configured": true, 00:15:31.414 "data_offset": 2048, 00:15:31.414 "data_size": 63488 00:15:31.414 }, 00:15:31.414 { 00:15:31.414 "name": "BaseBdev2", 00:15:31.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.414 "is_configured": false, 00:15:31.414 "data_offset": 0, 00:15:31.414 "data_size": 0 00:15:31.414 }, 00:15:31.414 { 00:15:31.414 "name": "BaseBdev3", 00:15:31.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.414 "is_configured": false, 00:15:31.414 "data_offset": 0, 00:15:31.414 "data_size": 0 00:15:31.414 } 00:15:31.414 ] 00:15:31.414 }' 00:15:31.414 21:11:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:31.414 21:11:53 -- common/autotest_common.sh@10 -- # set +x 00:15:31.980 21:11:54 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:32.238 [2024-06-07 21:11:54.774223] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.239 [2024-06-07 21:11:54.774488] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:32.239 21:11:54 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:32.239 21:11:54 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:32.497 21:11:55 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:32.755 BaseBdev1 00:15:32.755 21:11:55 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:32.755 21:11:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:32.755 21:11:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:32.755 21:11:55 -- common/autotest_common.sh@889 -- # local i 00:15:32.755 21:11:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:32.755 21:11:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:32.755 21:11:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:33.014 21:11:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:33.014 [ 00:15:33.014 { 00:15:33.014 "name": "BaseBdev1", 00:15:33.014 "aliases": [ 00:15:33.014 "a2b5515f-1585-44b7-884b-c09678087400" 00:15:33.014 ], 00:15:33.014 "product_name": "Malloc disk", 00:15:33.014 "block_size": 512, 00:15:33.014 "num_blocks": 65536, 00:15:33.014 "uuid": "a2b5515f-1585-44b7-884b-c09678087400", 00:15:33.014 "assigned_rate_limits": { 00:15:33.014 "rw_ios_per_sec": 0, 00:15:33.014 "rw_mbytes_per_sec": 0, 00:15:33.014 "r_mbytes_per_sec": 0, 00:15:33.014 "w_mbytes_per_sec": 0 00:15:33.014 }, 00:15:33.014 "claimed": false, 00:15:33.014 "zoned": false, 00:15:33.014 "supported_io_types": { 00:15:33.014 "read": true, 00:15:33.014 "write": true, 00:15:33.014 "unmap": true, 00:15:33.014 "write_zeroes": true, 00:15:33.014 "flush": true, 00:15:33.014 "reset": true, 00:15:33.014 "compare": false, 00:15:33.014 "compare_and_write": false, 00:15:33.014 "abort": true, 00:15:33.014 "nvme_admin": false, 00:15:33.014 "nvme_io": false 00:15:33.014 }, 00:15:33.014 "memory_domains": [ 00:15:33.014 { 00:15:33.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.014 "dma_device_type": 2 00:15:33.014 } 00:15:33.014 ], 00:15:33.014 "driver_specific": {} 00:15:33.014 } 00:15:33.014 ] 00:15:33.014 21:11:55 -- common/autotest_common.sh@895 -- # return 0 00:15:33.014 21:11:55 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:33.273 [2024-06-07 21:11:55.875721] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.273 [2024-06-07 21:11:55.878148] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.273 [2024-06-07 21:11:55.878351] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.273 [2024-06-07 21:11:55.878481] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:33.273 [2024-06-07 
21:11:55.878546] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:33.273 21:11:55 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:33.273 21:11:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:33.273 21:11:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:33.273 21:11:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:33.273 21:11:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:33.273 21:11:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:33.273 21:11:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:33.273 21:11:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:33.273 21:11:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:33.273 21:11:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:33.273 21:11:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:33.273 21:11:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:33.273 21:11:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.273 21:11:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.531 21:11:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:33.531 "name": "Existed_Raid", 00:15:33.531 "uuid": "6724b422-19d9-4803-bbbc-c0b5825452d7", 00:15:33.531 "strip_size_kb": 64, 00:15:33.531 "state": "configuring", 00:15:33.531 "raid_level": "concat", 00:15:33.531 "superblock": true, 00:15:33.531 "num_base_bdevs": 3, 00:15:33.531 "num_base_bdevs_discovered": 1, 00:15:33.531 "num_base_bdevs_operational": 3, 00:15:33.531 "base_bdevs_list": [ 00:15:33.531 { 00:15:33.531 "name": "BaseBdev1", 00:15:33.531 "uuid": "a2b5515f-1585-44b7-884b-c09678087400", 00:15:33.531 "is_configured": true, 00:15:33.531 "data_offset": 2048, 00:15:33.531 "data_size": 63488 00:15:33.531 }, 00:15:33.531 { 00:15:33.531 "name": "BaseBdev2", 00:15:33.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.531 "is_configured": false, 00:15:33.531 "data_offset": 0, 00:15:33.531 "data_size": 0 00:15:33.531 }, 00:15:33.531 { 00:15:33.531 "name": "BaseBdev3", 00:15:33.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.531 "is_configured": false, 00:15:33.531 "data_offset": 0, 00:15:33.531 "data_size": 0 00:15:33.531 } 00:15:33.531 ] 00:15:33.531 }' 00:15:33.531 21:11:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:33.531 21:11:56 -- common/autotest_common.sh@10 -- # set +x 00:15:34.463 21:11:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:34.463 [2024-06-07 21:11:57.009145] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:34.463 BaseBdev2 00:15:34.464 21:11:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:34.464 21:11:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:34.464 21:11:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:34.464 21:11:57 -- common/autotest_common.sh@889 -- # local i 00:15:34.464 21:11:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:34.464 21:11:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:34.464 21:11:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:34.721 21:11:57 -- 
common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:34.978 [ 00:15:34.978 { 00:15:34.978 "name": "BaseBdev2", 00:15:34.978 "aliases": [ 00:15:34.978 "b70a918b-f628-4fba-a79a-41950cb7f739" 00:15:34.978 ], 00:15:34.978 "product_name": "Malloc disk", 00:15:34.978 "block_size": 512, 00:15:34.978 "num_blocks": 65536, 00:15:34.978 "uuid": "b70a918b-f628-4fba-a79a-41950cb7f739", 00:15:34.978 "assigned_rate_limits": { 00:15:34.978 "rw_ios_per_sec": 0, 00:15:34.978 "rw_mbytes_per_sec": 0, 00:15:34.978 "r_mbytes_per_sec": 0, 00:15:34.978 "w_mbytes_per_sec": 0 00:15:34.978 }, 00:15:34.978 "claimed": true, 00:15:34.978 "claim_type": "exclusive_write", 00:15:34.978 "zoned": false, 00:15:34.978 "supported_io_types": { 00:15:34.978 "read": true, 00:15:34.978 "write": true, 00:15:34.978 "unmap": true, 00:15:34.978 "write_zeroes": true, 00:15:34.978 "flush": true, 00:15:34.978 "reset": true, 00:15:34.978 "compare": false, 00:15:34.978 "compare_and_write": false, 00:15:34.978 "abort": true, 00:15:34.978 "nvme_admin": false, 00:15:34.978 "nvme_io": false 00:15:34.978 }, 00:15:34.978 "memory_domains": [ 00:15:34.978 { 00:15:34.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.979 "dma_device_type": 2 00:15:34.979 } 00:15:34.979 ], 00:15:34.979 "driver_specific": {} 00:15:34.979 } 00:15:34.979 ] 00:15:34.979 21:11:57 -- common/autotest_common.sh@895 -- # return 0 00:15:34.979 21:11:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:34.979 21:11:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:34.979 21:11:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:34.979 21:11:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:34.979 21:11:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:34.979 21:11:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:34.979 21:11:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:34.979 21:11:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:34.979 21:11:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:34.979 21:11:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:34.979 21:11:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:34.979 21:11:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:34.979 21:11:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.979 21:11:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.236 21:11:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:35.236 "name": "Existed_Raid", 00:15:35.237 "uuid": "6724b422-19d9-4803-bbbc-c0b5825452d7", 00:15:35.237 "strip_size_kb": 64, 00:15:35.237 "state": "configuring", 00:15:35.237 "raid_level": "concat", 00:15:35.237 "superblock": true, 00:15:35.237 "num_base_bdevs": 3, 00:15:35.237 "num_base_bdevs_discovered": 2, 00:15:35.237 "num_base_bdevs_operational": 3, 00:15:35.237 "base_bdevs_list": [ 00:15:35.237 { 00:15:35.237 "name": "BaseBdev1", 00:15:35.237 "uuid": "a2b5515f-1585-44b7-884b-c09678087400", 00:15:35.237 "is_configured": true, 00:15:35.237 "data_offset": 2048, 00:15:35.237 "data_size": 63488 00:15:35.237 }, 00:15:35.237 { 00:15:35.237 "name": "BaseBdev2", 00:15:35.237 "uuid": "b70a918b-f628-4fba-a79a-41950cb7f739", 00:15:35.237 "is_configured": true, 00:15:35.237 "data_offset": 2048, 00:15:35.237 
"data_size": 63488 00:15:35.237 }, 00:15:35.237 { 00:15:35.237 "name": "BaseBdev3", 00:15:35.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.237 "is_configured": false, 00:15:35.237 "data_offset": 0, 00:15:35.237 "data_size": 0 00:15:35.237 } 00:15:35.237 ] 00:15:35.237 }' 00:15:35.237 21:11:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:35.237 21:11:57 -- common/autotest_common.sh@10 -- # set +x 00:15:35.807 21:11:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:36.072 [2024-06-07 21:11:58.610288] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:36.072 [2024-06-07 21:11:58.610741] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:15:36.072 [2024-06-07 21:11:58.610867] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:36.072 BaseBdev3 00:15:36.072 [2024-06-07 21:11:58.611050] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:36.072 [2024-06-07 21:11:58.611596] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:15:36.072 [2024-06-07 21:11:58.611755] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:15:36.072 [2024-06-07 21:11:58.612008] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.072 21:11:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:36.072 21:11:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:36.072 21:11:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:36.072 21:11:58 -- common/autotest_common.sh@889 -- # local i 00:15:36.072 21:11:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:36.072 21:11:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:36.072 21:11:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:36.330 21:11:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:36.588 [ 00:15:36.588 { 00:15:36.588 "name": "BaseBdev3", 00:15:36.588 "aliases": [ 00:15:36.588 "5d5d034b-38b1-487b-a528-89773057b2e6" 00:15:36.588 ], 00:15:36.588 "product_name": "Malloc disk", 00:15:36.588 "block_size": 512, 00:15:36.588 "num_blocks": 65536, 00:15:36.588 "uuid": "5d5d034b-38b1-487b-a528-89773057b2e6", 00:15:36.588 "assigned_rate_limits": { 00:15:36.588 "rw_ios_per_sec": 0, 00:15:36.588 "rw_mbytes_per_sec": 0, 00:15:36.588 "r_mbytes_per_sec": 0, 00:15:36.588 "w_mbytes_per_sec": 0 00:15:36.588 }, 00:15:36.588 "claimed": true, 00:15:36.588 "claim_type": "exclusive_write", 00:15:36.588 "zoned": false, 00:15:36.588 "supported_io_types": { 00:15:36.588 "read": true, 00:15:36.588 "write": true, 00:15:36.588 "unmap": true, 00:15:36.588 "write_zeroes": true, 00:15:36.588 "flush": true, 00:15:36.588 "reset": true, 00:15:36.588 "compare": false, 00:15:36.588 "compare_and_write": false, 00:15:36.588 "abort": true, 00:15:36.588 "nvme_admin": false, 00:15:36.588 "nvme_io": false 00:15:36.588 }, 00:15:36.588 "memory_domains": [ 00:15:36.588 { 00:15:36.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.588 "dma_device_type": 2 00:15:36.588 } 00:15:36.588 ], 00:15:36.588 "driver_specific": {} 00:15:36.588 } 00:15:36.588 ] 00:15:36.588 
21:11:59 -- common/autotest_common.sh@895 -- # return 0 00:15:36.588 21:11:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:36.588 21:11:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:36.588 21:11:59 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:36.588 21:11:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:36.588 21:11:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:36.588 21:11:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:36.588 21:11:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:36.588 21:11:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:36.588 21:11:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:36.588 21:11:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:36.588 21:11:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:36.588 21:11:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:36.588 21:11:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.588 21:11:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.846 21:11:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:36.846 "name": "Existed_Raid", 00:15:36.846 "uuid": "6724b422-19d9-4803-bbbc-c0b5825452d7", 00:15:36.846 "strip_size_kb": 64, 00:15:36.846 "state": "online", 00:15:36.846 "raid_level": "concat", 00:15:36.846 "superblock": true, 00:15:36.846 "num_base_bdevs": 3, 00:15:36.846 "num_base_bdevs_discovered": 3, 00:15:36.846 "num_base_bdevs_operational": 3, 00:15:36.846 "base_bdevs_list": [ 00:15:36.846 { 00:15:36.846 "name": "BaseBdev1", 00:15:36.846 "uuid": "a2b5515f-1585-44b7-884b-c09678087400", 00:15:36.846 "is_configured": true, 00:15:36.846 "data_offset": 2048, 00:15:36.846 "data_size": 63488 00:15:36.846 }, 00:15:36.846 { 00:15:36.846 "name": "BaseBdev2", 00:15:36.846 "uuid": "b70a918b-f628-4fba-a79a-41950cb7f739", 00:15:36.846 "is_configured": true, 00:15:36.846 "data_offset": 2048, 00:15:36.846 "data_size": 63488 00:15:36.846 }, 00:15:36.846 { 00:15:36.846 "name": "BaseBdev3", 00:15:36.846 "uuid": "5d5d034b-38b1-487b-a528-89773057b2e6", 00:15:36.846 "is_configured": true, 00:15:36.846 "data_offset": 2048, 00:15:36.846 "data_size": 63488 00:15:36.846 } 00:15:36.846 ] 00:15:36.846 }' 00:15:36.846 21:11:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:36.846 21:11:59 -- common/autotest_common.sh@10 -- # set +x 00:15:37.412 21:11:59 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:37.671 [2024-06-07 21:12:00.177402] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:37.671 [2024-06-07 21:12:00.177726] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.671 [2024-06-07 21:12:00.177935] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.671 21:12:00 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:37.671 21:12:00 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:37.671 21:12:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:37.671 21:12:00 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:37.671 21:12:00 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:37.671 21:12:00 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:37.671 21:12:00 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:37.671 21:12:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:37.671 21:12:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:37.671 21:12:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:37.671 21:12:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:37.671 21:12:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:37.671 21:12:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:37.671 21:12:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:37.671 21:12:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:37.671 21:12:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.671 21:12:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.929 21:12:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:37.929 "name": "Existed_Raid", 00:15:37.929 "uuid": "6724b422-19d9-4803-bbbc-c0b5825452d7", 00:15:37.929 "strip_size_kb": 64, 00:15:37.929 "state": "offline", 00:15:37.929 "raid_level": "concat", 00:15:37.929 "superblock": true, 00:15:37.929 "num_base_bdevs": 3, 00:15:37.929 "num_base_bdevs_discovered": 2, 00:15:37.929 "num_base_bdevs_operational": 2, 00:15:37.929 "base_bdevs_list": [ 00:15:37.929 { 00:15:37.929 "name": null, 00:15:37.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.929 "is_configured": false, 00:15:37.929 "data_offset": 2048, 00:15:37.929 "data_size": 63488 00:15:37.929 }, 00:15:37.929 { 00:15:37.929 "name": "BaseBdev2", 00:15:37.929 "uuid": "b70a918b-f628-4fba-a79a-41950cb7f739", 00:15:37.929 "is_configured": true, 00:15:37.929 "data_offset": 2048, 00:15:37.929 "data_size": 63488 00:15:37.929 }, 00:15:37.929 { 00:15:37.929 "name": "BaseBdev3", 00:15:37.929 "uuid": "5d5d034b-38b1-487b-a528-89773057b2e6", 00:15:37.929 "is_configured": true, 00:15:37.929 "data_offset": 2048, 00:15:37.929 "data_size": 63488 00:15:37.929 } 00:15:37.929 ] 00:15:37.929 }' 00:15:37.929 21:12:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:37.929 21:12:00 -- common/autotest_common.sh@10 -- # set +x 00:15:38.860 21:12:01 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:38.860 21:12:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:38.860 21:12:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:38.860 21:12:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.860 21:12:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:38.860 21:12:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:38.860 21:12:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:39.117 [2024-06-07 21:12:01.696162] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:39.117 21:12:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:39.117 21:12:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:39.117 21:12:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.117 21:12:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:39.375 21:12:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:39.375 21:12:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:39.375 21:12:01 -- 
bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:39.633 [2024-06-07 21:12:02.171161] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:39.633 [2024-06-07 21:12:02.171541] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:15:39.633 21:12:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:39.633 21:12:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:39.633 21:12:02 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.633 21:12:02 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:39.891 21:12:02 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:39.891 21:12:02 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:39.891 21:12:02 -- bdev/bdev_raid.sh@287 -- # killprocess 129615 00:15:39.891 21:12:02 -- common/autotest_common.sh@926 -- # '[' -z 129615 ']' 00:15:39.891 21:12:02 -- common/autotest_common.sh@930 -- # kill -0 129615 00:15:39.891 21:12:02 -- common/autotest_common.sh@931 -- # uname 00:15:39.891 21:12:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:39.891 21:12:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129615 00:15:39.891 21:12:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:39.891 killing process with pid 129615 00:15:39.891 21:12:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:39.891 21:12:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129615' 00:15:39.891 21:12:02 -- common/autotest_common.sh@945 -- # kill 129615 00:15:39.891 21:12:02 -- common/autotest_common.sh@950 -- # wait 129615 00:15:39.891 [2024-06-07 21:12:02.425706] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:39.891 [2024-06-07 21:12:02.425812] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:40.148 ************************************ 00:15:40.148 END TEST raid_state_function_test_sb 00:15:40.149 ************************************ 00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:40.149 00:15:40.149 real 0m11.942s 00:15:40.149 user 0m22.128s 00:15:40.149 sys 0m1.447s 00:15:40.149 21:12:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.149 21:12:02 -- common/autotest_common.sh@10 -- # set +x 00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:15:40.149 21:12:02 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:40.149 21:12:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:40.149 21:12:02 -- common/autotest_common.sh@10 -- # set +x 00:15:40.149 ************************************ 00:15:40.149 START TEST raid_superblock_test 00:15:40.149 ************************************ 00:15:40.149 21:12:02 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 
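For readers tracing the concat state-function test that just ended above, the whole run reduces to a short RPC sequence. The sketch below is not the harness itself: it assumes a bdev_svc app is already listening on /var/tmp/spdk-raid.sock, and the RPC variable is shorthand introduced here for the rpc.py invocation seen throughout this log.

  # Shorthand for the rpc.py call used everywhere in this log (path from the log).
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Three 32 MiB malloc base bdevs with 512-byte blocks (65536 blocks each,
  # matching the bdev_get_bdevs dumps above).
  for i in 1 2 3; do
    $RPC bdev_malloc_create 32 512 -b "BaseBdev$i"
  done

  # Concat array with a 64 KiB strip and an on-disk superblock (-s), as in the test.
  $RPC bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # State checks go through bdev_raid_get_bdevs plus jq, e.g.:
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

  # concat has no redundancy, so deleting a single base bdev is what pushed the
  # array from "online" to "offline" in the trace above:
  $RPC bdev_malloc_delete BaseBdev1

The waitforbdev and verify_raid_bdev_state helpers visible in the trace are just polling wrappers around these same bdev_get_bdevs and bdev_raid_get_bdevs calls.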
00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@357 -- # raid_pid=130014 00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:40.149 21:12:02 -- bdev/bdev_raid.sh@358 -- # waitforlisten 130014 /var/tmp/spdk-raid.sock 00:15:40.149 21:12:02 -- common/autotest_common.sh@819 -- # '[' -z 130014 ']' 00:15:40.149 21:12:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:40.149 21:12:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:40.149 21:12:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:40.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:40.149 21:12:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:40.149 21:12:02 -- common/autotest_common.sh@10 -- # set +x 00:15:40.149 [2024-06-07 21:12:02.790149] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:40.149 [2024-06-07 21:12:02.790611] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130014 ] 00:15:40.407 [2024-06-07 21:12:02.957171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.407 [2024-06-07 21:12:03.037919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.665 [2024-06-07 21:12:03.090642] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.231 21:12:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:41.231 21:12:03 -- common/autotest_common.sh@852 -- # return 0 00:15:41.231 21:12:03 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:41.231 21:12:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:41.231 21:12:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:41.231 21:12:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:41.231 21:12:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:41.231 21:12:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:41.231 21:12:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:41.231 21:12:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:41.231 21:12:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:41.489 malloc1 00:15:41.489 21:12:03 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:15:41.489 [2024-06-07 21:12:04.138500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:41.489 [2024-06-07 21:12:04.138916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.489 [2024-06-07 21:12:04.139143] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:41.489 [2024-06-07 21:12:04.139320] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.489 [2024-06-07 21:12:04.142160] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.489 [2024-06-07 21:12:04.142348] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:41.489 pt1 00:15:41.489 21:12:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:41.489 21:12:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:41.489 21:12:04 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:41.489 21:12:04 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:41.489 21:12:04 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:41.489 21:12:04 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:41.489 21:12:04 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:41.489 21:12:04 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:41.489 21:12:04 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:41.747 malloc2 00:15:41.747 21:12:04 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:42.005 [2024-06-07 21:12:04.537185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:42.005 [2024-06-07 21:12:04.537580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.005 [2024-06-07 21:12:04.537662] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:42.005 [2024-06-07 21:12:04.537953] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.005 [2024-06-07 21:12:04.540351] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.005 [2024-06-07 21:12:04.540529] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:42.005 pt2 00:15:42.005 21:12:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:42.005 21:12:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:42.005 21:12:04 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:15:42.005 21:12:04 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:15:42.005 21:12:04 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:42.005 21:12:04 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:42.005 21:12:04 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:42.005 21:12:04 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:42.005 21:12:04 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:42.275 malloc3 00:15:42.275 21:12:04 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 
00000000-0000-0000-0000-000000000003 00:15:42.565 [2024-06-07 21:12:04.973385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:42.565 [2024-06-07 21:12:04.973732] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.565 [2024-06-07 21:12:04.973973] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:42.565 [2024-06-07 21:12:04.974174] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.565 [2024-06-07 21:12:04.977577] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.565 [2024-06-07 21:12:04.977802] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:42.565 pt3 00:15:42.565 21:12:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:42.565 21:12:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:42.565 21:12:04 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:42.565 [2024-06-07 21:12:05.182268] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:42.565 [2024-06-07 21:12:05.184446] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:42.565 [2024-06-07 21:12:05.184697] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:42.565 [2024-06-07 21:12:05.185036] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:15:42.565 [2024-06-07 21:12:05.185183] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:42.565 [2024-06-07 21:12:05.185391] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:15:42.565 [2024-06-07 21:12:05.185836] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:15:42.565 [2024-06-07 21:12:05.185960] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:15:42.565 [2024-06-07 21:12:05.186267] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.565 21:12:05 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:42.565 21:12:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:42.565 21:12:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:42.565 21:12:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:42.565 21:12:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:42.565 21:12:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:42.565 21:12:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:42.565 21:12:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:42.565 21:12:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:42.565 21:12:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:42.565 21:12:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.565 21:12:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.823 21:12:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:42.823 "name": "raid_bdev1", 00:15:42.823 "uuid": "7128365f-d7c1-480d-813a-f19faaeb2cb4", 00:15:42.823 "strip_size_kb": 64, 00:15:42.823 "state": "online", 00:15:42.823 "raid_level": "concat", 
00:15:42.823 "superblock": true, 00:15:42.823 "num_base_bdevs": 3, 00:15:42.823 "num_base_bdevs_discovered": 3, 00:15:42.823 "num_base_bdevs_operational": 3, 00:15:42.823 "base_bdevs_list": [ 00:15:42.823 { 00:15:42.823 "name": "pt1", 00:15:42.823 "uuid": "9ac55033-9747-5d7f-9b77-d3de5bb45e26", 00:15:42.823 "is_configured": true, 00:15:42.823 "data_offset": 2048, 00:15:42.823 "data_size": 63488 00:15:42.823 }, 00:15:42.823 { 00:15:42.823 "name": "pt2", 00:15:42.823 "uuid": "40589e6a-7bda-5826-b025-89928161f189", 00:15:42.823 "is_configured": true, 00:15:42.823 "data_offset": 2048, 00:15:42.823 "data_size": 63488 00:15:42.823 }, 00:15:42.823 { 00:15:42.823 "name": "pt3", 00:15:42.823 "uuid": "9b695cab-3f15-53cf-ba29-e9efaa21186e", 00:15:42.823 "is_configured": true, 00:15:42.823 "data_offset": 2048, 00:15:42.823 "data_size": 63488 00:15:42.823 } 00:15:42.823 ] 00:15:42.823 }' 00:15:42.823 21:12:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:42.823 21:12:05 -- common/autotest_common.sh@10 -- # set +x 00:15:43.756 21:12:06 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:43.756 21:12:06 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:43.756 [2024-06-07 21:12:06.278678] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:43.756 21:12:06 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=7128365f-d7c1-480d-813a-f19faaeb2cb4 00:15:43.756 21:12:06 -- bdev/bdev_raid.sh@380 -- # '[' -z 7128365f-d7c1-480d-813a-f19faaeb2cb4 ']' 00:15:43.756 21:12:06 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:44.013 [2024-06-07 21:12:06.530480] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:44.013 [2024-06-07 21:12:06.530705] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:44.013 [2024-06-07 21:12:06.530971] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.013 [2024-06-07 21:12:06.531203] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.013 [2024-06-07 21:12:06.531318] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:15:44.013 21:12:06 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.013 21:12:06 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:44.271 21:12:06 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:44.271 21:12:06 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:44.271 21:12:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:44.271 21:12:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:44.529 21:12:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:44.529 21:12:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:44.529 21:12:07 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:44.529 21:12:07 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:44.787 21:12:07 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs 00:15:44.787 21:12:07 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:45.115 21:12:07 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:45.116 21:12:07 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:45.116 21:12:07 -- common/autotest_common.sh@640 -- # local es=0 00:15:45.116 21:12:07 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:45.116 21:12:07 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.116 21:12:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:45.116 21:12:07 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.116 21:12:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:45.116 21:12:07 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.116 21:12:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:45.116 21:12:07 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.116 21:12:07 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:45.116 21:12:07 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:45.374 [2024-06-07 21:12:07.842828] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:45.374 [2024-06-07 21:12:07.845224] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:45.374 [2024-06-07 21:12:07.845452] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:45.374 [2024-06-07 21:12:07.845563] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:45.374 [2024-06-07 21:12:07.845826] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:45.374 [2024-06-07 21:12:07.845982] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:15:45.374 [2024-06-07 21:12:07.846161] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:45.374 [2024-06-07 21:12:07.846204] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:15:45.374 request: 00:15:45.374 { 00:15:45.374 "name": "raid_bdev1", 00:15:45.374 "raid_level": "concat", 00:15:45.374 "base_bdevs": [ 00:15:45.374 "malloc1", 00:15:45.374 "malloc2", 00:15:45.374 "malloc3" 00:15:45.374 ], 00:15:45.374 "superblock": false, 00:15:45.374 "strip_size_kb": 64, 00:15:45.374 "method": "bdev_raid_create", 00:15:45.374 "req_id": 1 00:15:45.374 } 00:15:45.374 Got JSON-RPC error response 00:15:45.374 response: 00:15:45.374 { 00:15:45.374 "code": -17, 00:15:45.374 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:45.374 } 00:15:45.374 21:12:07 -- common/autotest_common.sh@643 -- # es=1 00:15:45.374 21:12:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 
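The JSON-RPC error above (code -17, "Failed to create RAID bdev raid_bdev1: File exists") is the branch this test is after: malloc1 through malloc3 still carry the superblock written when they were first assembled into raid_bdev1, so building a fresh array directly on top of them is refused. A minimal reproduction, under the same assumptions as the earlier sketch:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Each malloc bdev still holds raid metadata from the earlier superblock-enabled
  # array, so this create must fail with -17 "File exists":
  if ! $RPC bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
    echo "got the expected 'File exists' error"
  fi

  # Re-wrapping a member in a passthru bdev is the supported path: the raid module
  # finds the superblock through pt1 and claims it again, as the trace shows next.
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001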
00:15:45.374 21:12:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:45.374 21:12:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:45.374 21:12:07 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.374 21:12:07 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:45.634 21:12:08 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:45.634 21:12:08 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:45.634 21:12:08 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:45.634 [2024-06-07 21:12:08.250898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:45.634 [2024-06-07 21:12:08.251272] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.634 [2024-06-07 21:12:08.251347] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:45.634 [2024-06-07 21:12:08.251469] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.634 [2024-06-07 21:12:08.253849] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.634 [2024-06-07 21:12:08.254015] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:45.634 [2024-06-07 21:12:08.254231] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:45.634 [2024-06-07 21:12:08.254389] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:45.634 pt1 00:15:45.634 21:12:08 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:15:45.634 21:12:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:45.634 21:12:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:45.634 21:12:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:45.634 21:12:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:45.634 21:12:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:45.634 21:12:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:45.634 21:12:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:45.634 21:12:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:45.634 21:12:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:45.634 21:12:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.634 21:12:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.893 21:12:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:45.893 "name": "raid_bdev1", 00:15:45.893 "uuid": "7128365f-d7c1-480d-813a-f19faaeb2cb4", 00:15:45.893 "strip_size_kb": 64, 00:15:45.893 "state": "configuring", 00:15:45.893 "raid_level": "concat", 00:15:45.893 "superblock": true, 00:15:45.893 "num_base_bdevs": 3, 00:15:45.893 "num_base_bdevs_discovered": 1, 00:15:45.893 "num_base_bdevs_operational": 3, 00:15:45.893 "base_bdevs_list": [ 00:15:45.893 { 00:15:45.893 "name": "pt1", 00:15:45.893 "uuid": "9ac55033-9747-5d7f-9b77-d3de5bb45e26", 00:15:45.893 "is_configured": true, 00:15:45.893 "data_offset": 2048, 00:15:45.893 "data_size": 63488 00:15:45.893 }, 00:15:45.893 { 00:15:45.893 "name": null, 00:15:45.893 "uuid": "40589e6a-7bda-5826-b025-89928161f189", 00:15:45.893 "is_configured": 
false, 00:15:45.893 "data_offset": 2048, 00:15:45.893 "data_size": 63488 00:15:45.893 }, 00:15:45.893 { 00:15:45.893 "name": null, 00:15:45.893 "uuid": "9b695cab-3f15-53cf-ba29-e9efaa21186e", 00:15:45.893 "is_configured": false, 00:15:45.894 "data_offset": 2048, 00:15:45.894 "data_size": 63488 00:15:45.894 } 00:15:45.894 ] 00:15:45.894 }' 00:15:45.894 21:12:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:45.894 21:12:08 -- common/autotest_common.sh@10 -- # set +x 00:15:46.460 21:12:09 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:15:46.461 21:12:09 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:46.719 [2024-06-07 21:12:09.303254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:46.719 [2024-06-07 21:12:09.303608] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.719 [2024-06-07 21:12:09.303690] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:46.719 [2024-06-07 21:12:09.303817] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.719 [2024-06-07 21:12:09.304351] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.719 [2024-06-07 21:12:09.304489] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:46.719 [2024-06-07 21:12:09.304697] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:46.719 [2024-06-07 21:12:09.304834] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:46.719 pt2 00:15:46.719 21:12:09 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:46.976 [2024-06-07 21:12:09.563367] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:46.976 21:12:09 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:15:46.976 21:12:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:46.976 21:12:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:46.976 21:12:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:46.976 21:12:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:46.976 21:12:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:46.976 21:12:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:46.976 21:12:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:46.976 21:12:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:46.976 21:12:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:46.976 21:12:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.976 21:12:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.232 21:12:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:47.232 "name": "raid_bdev1", 00:15:47.232 "uuid": "7128365f-d7c1-480d-813a-f19faaeb2cb4", 00:15:47.232 "strip_size_kb": 64, 00:15:47.232 "state": "configuring", 00:15:47.232 "raid_level": "concat", 00:15:47.232 "superblock": true, 00:15:47.232 "num_base_bdevs": 3, 00:15:47.232 "num_base_bdevs_discovered": 1, 00:15:47.232 "num_base_bdevs_operational": 3, 00:15:47.232 "base_bdevs_list": [ 00:15:47.232 { 00:15:47.232 "name": "pt1", 
00:15:47.232 "uuid": "9ac55033-9747-5d7f-9b77-d3de5bb45e26", 00:15:47.232 "is_configured": true, 00:15:47.232 "data_offset": 2048, 00:15:47.232 "data_size": 63488 00:15:47.232 }, 00:15:47.232 { 00:15:47.232 "name": null, 00:15:47.232 "uuid": "40589e6a-7bda-5826-b025-89928161f189", 00:15:47.232 "is_configured": false, 00:15:47.232 "data_offset": 2048, 00:15:47.232 "data_size": 63488 00:15:47.232 }, 00:15:47.232 { 00:15:47.232 "name": null, 00:15:47.232 "uuid": "9b695cab-3f15-53cf-ba29-e9efaa21186e", 00:15:47.232 "is_configured": false, 00:15:47.232 "data_offset": 2048, 00:15:47.232 "data_size": 63488 00:15:47.232 } 00:15:47.232 ] 00:15:47.232 }' 00:15:47.232 21:12:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:47.232 21:12:09 -- common/autotest_common.sh@10 -- # set +x 00:15:48.188 21:12:10 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:48.188 21:12:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:48.188 21:12:10 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:48.188 [2024-06-07 21:12:10.747564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:48.188 [2024-06-07 21:12:10.747853] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.188 [2024-06-07 21:12:10.747934] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:48.188 [2024-06-07 21:12:10.748169] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.188 [2024-06-07 21:12:10.748706] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.188 [2024-06-07 21:12:10.748874] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:48.188 [2024-06-07 21:12:10.749122] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:48.188 [2024-06-07 21:12:10.749289] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:48.188 pt2 00:15:48.188 21:12:10 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:48.188 21:12:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:48.188 21:12:10 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:48.447 [2024-06-07 21:12:11.019685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:48.447 [2024-06-07 21:12:11.019984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.447 [2024-06-07 21:12:11.020143] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:48.447 [2024-06-07 21:12:11.020266] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.447 [2024-06-07 21:12:11.020850] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.447 [2024-06-07 21:12:11.021068] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:48.447 [2024-06-07 21:12:11.021288] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:48.447 [2024-06-07 21:12:11.021426] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:48.447 [2024-06-07 21:12:11.021698] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000009c80 00:15:48.447 [2024-06-07 21:12:11.021813] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:48.447 [2024-06-07 21:12:11.021959] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:48.447 [2024-06-07 21:12:11.022332] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:15:48.447 [2024-06-07 21:12:11.022491] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:15:48.447 [2024-06-07 21:12:11.022717] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.447 pt3 00:15:48.447 21:12:11 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:48.447 21:12:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:48.447 21:12:11 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:48.447 21:12:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:48.447 21:12:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:48.447 21:12:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:48.447 21:12:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:48.447 21:12:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:48.447 21:12:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:48.447 21:12:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:48.447 21:12:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:48.447 21:12:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:48.447 21:12:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.447 21:12:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.705 21:12:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:48.705 "name": "raid_bdev1", 00:15:48.705 "uuid": "7128365f-d7c1-480d-813a-f19faaeb2cb4", 00:15:48.705 "strip_size_kb": 64, 00:15:48.705 "state": "online", 00:15:48.705 "raid_level": "concat", 00:15:48.705 "superblock": true, 00:15:48.705 "num_base_bdevs": 3, 00:15:48.705 "num_base_bdevs_discovered": 3, 00:15:48.705 "num_base_bdevs_operational": 3, 00:15:48.705 "base_bdevs_list": [ 00:15:48.705 { 00:15:48.705 "name": "pt1", 00:15:48.705 "uuid": "9ac55033-9747-5d7f-9b77-d3de5bb45e26", 00:15:48.705 "is_configured": true, 00:15:48.705 "data_offset": 2048, 00:15:48.705 "data_size": 63488 00:15:48.705 }, 00:15:48.705 { 00:15:48.705 "name": "pt2", 00:15:48.705 "uuid": "40589e6a-7bda-5826-b025-89928161f189", 00:15:48.705 "is_configured": true, 00:15:48.705 "data_offset": 2048, 00:15:48.705 "data_size": 63488 00:15:48.705 }, 00:15:48.705 { 00:15:48.705 "name": "pt3", 00:15:48.705 "uuid": "9b695cab-3f15-53cf-ba29-e9efaa21186e", 00:15:48.705 "is_configured": true, 00:15:48.705 "data_offset": 2048, 00:15:48.705 "data_size": 63488 00:15:48.705 } 00:15:48.705 ] 00:15:48.705 }' 00:15:48.705 21:12:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:48.705 21:12:11 -- common/autotest_common.sh@10 -- # set +x 00:15:49.641 21:12:11 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:49.641 21:12:11 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:49.641 [2024-06-07 21:12:12.240187] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.641 21:12:12 -- bdev/bdev_raid.sh@430 -- # '[' 
7128365f-d7c1-480d-813a-f19faaeb2cb4 '!=' 7128365f-d7c1-480d-813a-f19faaeb2cb4 ']' 00:15:49.641 21:12:12 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:15:49.641 21:12:12 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:49.641 21:12:12 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:49.641 21:12:12 -- bdev/bdev_raid.sh@511 -- # killprocess 130014 00:15:49.641 21:12:12 -- common/autotest_common.sh@926 -- # '[' -z 130014 ']' 00:15:49.641 21:12:12 -- common/autotest_common.sh@930 -- # kill -0 130014 00:15:49.641 21:12:12 -- common/autotest_common.sh@931 -- # uname 00:15:49.641 21:12:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:49.641 21:12:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130014 00:15:49.641 killing process with pid 130014 00:15:49.641 21:12:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:49.641 21:12:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:49.641 21:12:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130014' 00:15:49.641 21:12:12 -- common/autotest_common.sh@945 -- # kill 130014 00:15:49.641 21:12:12 -- common/autotest_common.sh@950 -- # wait 130014 00:15:49.641 [2024-06-07 21:12:12.278395] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:49.641 [2024-06-07 21:12:12.278523] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.641 [2024-06-07 21:12:12.278620] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.641 [2024-06-07 21:12:12.278729] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:15:49.641 [2024-06-07 21:12:12.308404] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:49.899 ************************************ 00:15:49.899 END TEST raid_superblock_test 00:15:49.899 ************************************ 00:15:49.899 21:12:12 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:49.899 00:15:49.899 real 0m9.808s 00:15:49.899 user 0m17.926s 00:15:49.899 sys 0m1.226s 00:15:49.899 21:12:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:49.899 21:12:12 -- common/autotest_common.sh@10 -- # set +x 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:15:50.158 21:12:12 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:50.158 21:12:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:50.158 21:12:12 -- common/autotest_common.sh@10 -- # set +x 00:15:50.158 ************************************ 00:15:50.158 START TEST raid_state_function_test 00:15:50.158 ************************************ 00:15:50.158 21:12:12 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:50.158 21:12:12 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@226 -- # raid_pid=130324 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130324' 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:50.158 Process raid pid: 130324 00:15:50.158 21:12:12 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130324 /var/tmp/spdk-raid.sock 00:15:50.158 21:12:12 -- common/autotest_common.sh@819 -- # '[' -z 130324 ']' 00:15:50.158 21:12:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:50.158 21:12:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:50.158 21:12:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:50.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:50.158 21:12:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:50.158 21:12:12 -- common/autotest_common.sh@10 -- # set +x 00:15:50.158 [2024-06-07 21:12:12.649494] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
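For context: the run above boots SPDK's bdev_svc app on a private RPC socket and then drives it entirely through rpc.py. A minimal standalone sketch of that startup handshake, assuming the same repo layout and socket path seen in this log; the rpc_get_methods polling loop is a simplified stand-in for the waitforlisten helper used here:

    # launch the bdev service with raid debug logging on a dedicated socket
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # poll until the UNIX socket answers RPCs (simplified waitforlisten)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done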
00:15:50.158 [2024-06-07 21:12:12.649916] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.158 [2024-06-07 21:12:12.817292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.416 [2024-06-07 21:12:12.903390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.417 [2024-06-07 21:12:12.957984] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.984 21:12:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:50.984 21:12:13 -- common/autotest_common.sh@852 -- # return 0 00:15:50.984 21:12:13 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:51.243 [2024-06-07 21:12:13.746621] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:51.243 [2024-06-07 21:12:13.746865] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:51.243 [2024-06-07 21:12:13.746983] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:51.243 [2024-06-07 21:12:13.747044] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:51.243 [2024-06-07 21:12:13.747234] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:51.243 [2024-06-07 21:12:13.747313] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:51.243 21:12:13 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:51.243 21:12:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:51.243 21:12:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:51.243 21:12:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:51.243 21:12:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:51.243 21:12:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:51.243 21:12:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:51.243 21:12:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:51.243 21:12:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:51.243 21:12:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:51.243 21:12:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.243 21:12:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.501 21:12:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:51.501 "name": "Existed_Raid", 00:15:51.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.501 "strip_size_kb": 0, 00:15:51.501 "state": "configuring", 00:15:51.501 "raid_level": "raid1", 00:15:51.501 "superblock": false, 00:15:51.501 "num_base_bdevs": 3, 00:15:51.501 "num_base_bdevs_discovered": 0, 00:15:51.501 "num_base_bdevs_operational": 3, 00:15:51.501 "base_bdevs_list": [ 00:15:51.501 { 00:15:51.501 "name": "BaseBdev1", 00:15:51.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.501 "is_configured": false, 00:15:51.501 "data_offset": 0, 00:15:51.501 "data_size": 0 00:15:51.501 }, 00:15:51.501 { 00:15:51.501 "name": "BaseBdev2", 00:15:51.501 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:51.501 "is_configured": false, 00:15:51.501 "data_offset": 0, 00:15:51.501 "data_size": 0 00:15:51.501 }, 00:15:51.501 { 00:15:51.501 "name": "BaseBdev3", 00:15:51.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.501 "is_configured": false, 00:15:51.501 "data_offset": 0, 00:15:51.501 "data_size": 0 00:15:51.501 } 00:15:51.501 ] 00:15:51.501 }' 00:15:51.502 21:12:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:51.502 21:12:13 -- common/autotest_common.sh@10 -- # set +x 00:15:52.069 21:12:14 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:52.328 [2024-06-07 21:12:14.854736] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:52.328 [2024-06-07 21:12:14.854933] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:52.328 21:12:14 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:52.586 [2024-06-07 21:12:15.062826] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:52.586 [2024-06-07 21:12:15.063160] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:52.586 [2024-06-07 21:12:15.063263] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:52.586 [2024-06-07 21:12:15.063387] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:52.586 [2024-06-07 21:12:15.063498] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:52.586 [2024-06-07 21:12:15.063570] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:52.586 21:12:15 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:52.846 [2024-06-07 21:12:15.277850] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:52.846 BaseBdev1 00:15:52.846 21:12:15 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:52.846 21:12:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:52.846 21:12:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:52.846 21:12:15 -- common/autotest_common.sh@889 -- # local i 00:15:52.846 21:12:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:52.846 21:12:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:52.846 21:12:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:52.846 21:12:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:53.107 [ 00:15:53.107 { 00:15:53.107 "name": "BaseBdev1", 00:15:53.107 "aliases": [ 00:15:53.107 "93728f93-e772-4f2c-9f33-23ff2443b5e7" 00:15:53.107 ], 00:15:53.107 "product_name": "Malloc disk", 00:15:53.107 "block_size": 512, 00:15:53.107 "num_blocks": 65536, 00:15:53.107 "uuid": "93728f93-e772-4f2c-9f33-23ff2443b5e7", 00:15:53.107 "assigned_rate_limits": { 00:15:53.107 "rw_ios_per_sec": 0, 00:15:53.107 "rw_mbytes_per_sec": 0, 00:15:53.107 "r_mbytes_per_sec": 0, 00:15:53.107 "w_mbytes_per_sec": 0 
00:15:53.107 }, 00:15:53.107 "claimed": true, 00:15:53.107 "claim_type": "exclusive_write", 00:15:53.107 "zoned": false, 00:15:53.107 "supported_io_types": { 00:15:53.107 "read": true, 00:15:53.107 "write": true, 00:15:53.107 "unmap": true, 00:15:53.107 "write_zeroes": true, 00:15:53.107 "flush": true, 00:15:53.107 "reset": true, 00:15:53.107 "compare": false, 00:15:53.107 "compare_and_write": false, 00:15:53.107 "abort": true, 00:15:53.107 "nvme_admin": false, 00:15:53.107 "nvme_io": false 00:15:53.107 }, 00:15:53.107 "memory_domains": [ 00:15:53.107 { 00:15:53.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.107 "dma_device_type": 2 00:15:53.107 } 00:15:53.107 ], 00:15:53.107 "driver_specific": {} 00:15:53.107 } 00:15:53.107 ] 00:15:53.107 21:12:15 -- common/autotest_common.sh@895 -- # return 0 00:15:53.107 21:12:15 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:53.107 21:12:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:53.107 21:12:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:53.107 21:12:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:53.107 21:12:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:53.107 21:12:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:53.107 21:12:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:53.107 21:12:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:53.107 21:12:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:53.107 21:12:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:53.107 21:12:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.107 21:12:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.367 21:12:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:53.367 "name": "Existed_Raid", 00:15:53.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.368 "strip_size_kb": 0, 00:15:53.368 "state": "configuring", 00:15:53.368 "raid_level": "raid1", 00:15:53.368 "superblock": false, 00:15:53.368 "num_base_bdevs": 3, 00:15:53.368 "num_base_bdevs_discovered": 1, 00:15:53.368 "num_base_bdevs_operational": 3, 00:15:53.368 "base_bdevs_list": [ 00:15:53.368 { 00:15:53.368 "name": "BaseBdev1", 00:15:53.368 "uuid": "93728f93-e772-4f2c-9f33-23ff2443b5e7", 00:15:53.368 "is_configured": true, 00:15:53.368 "data_offset": 0, 00:15:53.368 "data_size": 65536 00:15:53.368 }, 00:15:53.368 { 00:15:53.368 "name": "BaseBdev2", 00:15:53.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.368 "is_configured": false, 00:15:53.368 "data_offset": 0, 00:15:53.368 "data_size": 0 00:15:53.368 }, 00:15:53.368 { 00:15:53.368 "name": "BaseBdev3", 00:15:53.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.368 "is_configured": false, 00:15:53.368 "data_offset": 0, 00:15:53.368 "data_size": 0 00:15:53.368 } 00:15:53.368 ] 00:15:53.368 }' 00:15:53.368 21:12:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:53.368 21:12:15 -- common/autotest_common.sh@10 -- # set +x 00:15:54.306 21:12:16 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:54.306 [2024-06-07 21:12:16.862295] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:54.306 [2024-06-07 21:12:16.862529] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 
name Existed_Raid, state configuring 00:15:54.306 21:12:16 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:54.306 21:12:16 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:54.565 [2024-06-07 21:12:17.070362] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.565 [2024-06-07 21:12:17.072427] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.565 [2024-06-07 21:12:17.072639] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.565 [2024-06-07 21:12:17.072765] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:54.565 [2024-06-07 21:12:17.072827] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:54.565 21:12:17 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:54.565 21:12:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:54.565 21:12:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:54.565 21:12:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:54.565 21:12:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:54.565 21:12:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:54.565 21:12:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:54.565 21:12:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:54.565 21:12:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:54.565 21:12:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:54.565 21:12:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:54.565 21:12:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:54.565 21:12:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.565 21:12:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.824 21:12:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:54.824 "name": "Existed_Raid", 00:15:54.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.824 "strip_size_kb": 0, 00:15:54.824 "state": "configuring", 00:15:54.824 "raid_level": "raid1", 00:15:54.824 "superblock": false, 00:15:54.824 "num_base_bdevs": 3, 00:15:54.824 "num_base_bdevs_discovered": 1, 00:15:54.824 "num_base_bdevs_operational": 3, 00:15:54.824 "base_bdevs_list": [ 00:15:54.824 { 00:15:54.824 "name": "BaseBdev1", 00:15:54.824 "uuid": "93728f93-e772-4f2c-9f33-23ff2443b5e7", 00:15:54.824 "is_configured": true, 00:15:54.824 "data_offset": 0, 00:15:54.824 "data_size": 65536 00:15:54.824 }, 00:15:54.824 { 00:15:54.824 "name": "BaseBdev2", 00:15:54.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.824 "is_configured": false, 00:15:54.824 "data_offset": 0, 00:15:54.824 "data_size": 0 00:15:54.824 }, 00:15:54.824 { 00:15:54.824 "name": "BaseBdev3", 00:15:54.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.824 "is_configured": false, 00:15:54.824 "data_offset": 0, 00:15:54.824 "data_size": 0 00:15:54.824 } 00:15:54.824 ] 00:15:54.824 }' 00:15:54.824 21:12:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:54.824 21:12:17 -- common/autotest_common.sh@10 -- # set +x 00:15:55.390 21:12:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:55.648 [2024-06-07 21:12:18.288408] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:55.648 BaseBdev2 00:15:55.648 21:12:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:55.648 21:12:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:55.648 21:12:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:55.648 21:12:18 -- common/autotest_common.sh@889 -- # local i 00:15:55.648 21:12:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:55.648 21:12:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:55.648 21:12:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:55.906 21:12:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:56.164 [ 00:15:56.164 { 00:15:56.164 "name": "BaseBdev2", 00:15:56.164 "aliases": [ 00:15:56.164 "1eb29a3f-762a-4f39-89ca-5615a02c376e" 00:15:56.164 ], 00:15:56.164 "product_name": "Malloc disk", 00:15:56.164 "block_size": 512, 00:15:56.164 "num_blocks": 65536, 00:15:56.164 "uuid": "1eb29a3f-762a-4f39-89ca-5615a02c376e", 00:15:56.164 "assigned_rate_limits": { 00:15:56.164 "rw_ios_per_sec": 0, 00:15:56.164 "rw_mbytes_per_sec": 0, 00:15:56.164 "r_mbytes_per_sec": 0, 00:15:56.164 "w_mbytes_per_sec": 0 00:15:56.164 }, 00:15:56.164 "claimed": true, 00:15:56.164 "claim_type": "exclusive_write", 00:15:56.164 "zoned": false, 00:15:56.164 "supported_io_types": { 00:15:56.164 "read": true, 00:15:56.164 "write": true, 00:15:56.164 "unmap": true, 00:15:56.164 "write_zeroes": true, 00:15:56.164 "flush": true, 00:15:56.164 "reset": true, 00:15:56.164 "compare": false, 00:15:56.164 "compare_and_write": false, 00:15:56.164 "abort": true, 00:15:56.164 "nvme_admin": false, 00:15:56.164 "nvme_io": false 00:15:56.164 }, 00:15:56.164 "memory_domains": [ 00:15:56.164 { 00:15:56.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.164 "dma_device_type": 2 00:15:56.164 } 00:15:56.164 ], 00:15:56.164 "driver_specific": {} 00:15:56.164 } 00:15:56.164 ] 00:15:56.164 21:12:18 -- common/autotest_common.sh@895 -- # return 0 00:15:56.164 21:12:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:56.164 21:12:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:56.164 21:12:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:56.164 21:12:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:56.164 21:12:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:56.164 21:12:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:56.164 21:12:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:56.164 21:12:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:56.164 21:12:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:56.164 21:12:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:56.164 21:12:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:56.164 21:12:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:56.164 21:12:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.164 21:12:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.422 21:12:18 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:15:56.422 "name": "Existed_Raid", 00:15:56.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.422 "strip_size_kb": 0, 00:15:56.422 "state": "configuring", 00:15:56.422 "raid_level": "raid1", 00:15:56.422 "superblock": false, 00:15:56.422 "num_base_bdevs": 3, 00:15:56.422 "num_base_bdevs_discovered": 2, 00:15:56.422 "num_base_bdevs_operational": 3, 00:15:56.422 "base_bdevs_list": [ 00:15:56.422 { 00:15:56.422 "name": "BaseBdev1", 00:15:56.422 "uuid": "93728f93-e772-4f2c-9f33-23ff2443b5e7", 00:15:56.423 "is_configured": true, 00:15:56.423 "data_offset": 0, 00:15:56.423 "data_size": 65536 00:15:56.423 }, 00:15:56.423 { 00:15:56.423 "name": "BaseBdev2", 00:15:56.423 "uuid": "1eb29a3f-762a-4f39-89ca-5615a02c376e", 00:15:56.423 "is_configured": true, 00:15:56.423 "data_offset": 0, 00:15:56.423 "data_size": 65536 00:15:56.423 }, 00:15:56.423 { 00:15:56.423 "name": "BaseBdev3", 00:15:56.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.423 "is_configured": false, 00:15:56.423 "data_offset": 0, 00:15:56.423 "data_size": 0 00:15:56.423 } 00:15:56.423 ] 00:15:56.423 }' 00:15:56.423 21:12:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:56.423 21:12:18 -- common/autotest_common.sh@10 -- # set +x 00:15:56.988 21:12:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:57.246 [2024-06-07 21:12:19.769493] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:57.247 [2024-06-07 21:12:19.769825] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:57.247 [2024-06-07 21:12:19.769867] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:57.247 [2024-06-07 21:12:19.770149] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:57.247 [2024-06-07 21:12:19.770699] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:57.247 [2024-06-07 21:12:19.770835] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:15:57.247 [2024-06-07 21:12:19.771204] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.247 BaseBdev3 00:15:57.247 21:12:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:57.247 21:12:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:57.247 21:12:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:57.247 21:12:19 -- common/autotest_common.sh@889 -- # local i 00:15:57.247 21:12:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:57.247 21:12:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:57.247 21:12:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:57.504 21:12:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:57.762 [ 00:15:57.762 { 00:15:57.762 "name": "BaseBdev3", 00:15:57.762 "aliases": [ 00:15:57.762 "136f406b-eb83-45c6-8a62-e63e4c91bda2" 00:15:57.762 ], 00:15:57.762 "product_name": "Malloc disk", 00:15:57.762 "block_size": 512, 00:15:57.762 "num_blocks": 65536, 00:15:57.762 "uuid": "136f406b-eb83-45c6-8a62-e63e4c91bda2", 00:15:57.762 "assigned_rate_limits": { 00:15:57.762 "rw_ios_per_sec": 0, 00:15:57.762 "rw_mbytes_per_sec": 0, 
00:15:57.762 "r_mbytes_per_sec": 0, 00:15:57.762 "w_mbytes_per_sec": 0 00:15:57.762 }, 00:15:57.762 "claimed": true, 00:15:57.762 "claim_type": "exclusive_write", 00:15:57.762 "zoned": false, 00:15:57.762 "supported_io_types": { 00:15:57.762 "read": true, 00:15:57.762 "write": true, 00:15:57.762 "unmap": true, 00:15:57.762 "write_zeroes": true, 00:15:57.762 "flush": true, 00:15:57.762 "reset": true, 00:15:57.762 "compare": false, 00:15:57.762 "compare_and_write": false, 00:15:57.762 "abort": true, 00:15:57.762 "nvme_admin": false, 00:15:57.762 "nvme_io": false 00:15:57.762 }, 00:15:57.762 "memory_domains": [ 00:15:57.762 { 00:15:57.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.762 "dma_device_type": 2 00:15:57.762 } 00:15:57.762 ], 00:15:57.762 "driver_specific": {} 00:15:57.762 } 00:15:57.762 ] 00:15:57.762 21:12:20 -- common/autotest_common.sh@895 -- # return 0 00:15:57.762 21:12:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:57.762 21:12:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:57.762 21:12:20 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:57.762 21:12:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:57.762 21:12:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:57.762 21:12:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:57.762 21:12:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:57.762 21:12:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:57.762 21:12:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:57.762 21:12:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:57.762 21:12:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:57.762 21:12:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:57.762 21:12:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.762 21:12:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.020 21:12:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:58.020 "name": "Existed_Raid", 00:15:58.020 "uuid": "a304d671-b971-4250-b6ed-38c215075f39", 00:15:58.020 "strip_size_kb": 0, 00:15:58.020 "state": "online", 00:15:58.020 "raid_level": "raid1", 00:15:58.020 "superblock": false, 00:15:58.020 "num_base_bdevs": 3, 00:15:58.020 "num_base_bdevs_discovered": 3, 00:15:58.020 "num_base_bdevs_operational": 3, 00:15:58.020 "base_bdevs_list": [ 00:15:58.020 { 00:15:58.020 "name": "BaseBdev1", 00:15:58.020 "uuid": "93728f93-e772-4f2c-9f33-23ff2443b5e7", 00:15:58.020 "is_configured": true, 00:15:58.020 "data_offset": 0, 00:15:58.020 "data_size": 65536 00:15:58.020 }, 00:15:58.020 { 00:15:58.020 "name": "BaseBdev2", 00:15:58.020 "uuid": "1eb29a3f-762a-4f39-89ca-5615a02c376e", 00:15:58.020 "is_configured": true, 00:15:58.020 "data_offset": 0, 00:15:58.020 "data_size": 65536 00:15:58.020 }, 00:15:58.020 { 00:15:58.020 "name": "BaseBdev3", 00:15:58.020 "uuid": "136f406b-eb83-45c6-8a62-e63e4c91bda2", 00:15:58.020 "is_configured": true, 00:15:58.020 "data_offset": 0, 00:15:58.020 "data_size": 65536 00:15:58.020 } 00:15:58.020 ] 00:15:58.020 }' 00:15:58.020 21:12:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:58.020 21:12:20 -- common/autotest_common.sh@10 -- # set +x 00:15:58.586 21:12:21 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:58.844 [2024-06-07 
21:12:21.410130] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:58.844 21:12:21 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:58.844 21:12:21 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:58.844 21:12:21 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:58.844 21:12:21 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:58.844 21:12:21 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:58.844 21:12:21 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:58.844 21:12:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:58.844 21:12:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:58.844 21:12:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:58.844 21:12:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:58.844 21:12:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:58.844 21:12:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:58.844 21:12:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:58.844 21:12:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:58.844 21:12:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:58.844 21:12:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.844 21:12:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.103 21:12:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:59.103 "name": "Existed_Raid", 00:15:59.103 "uuid": "a304d671-b971-4250-b6ed-38c215075f39", 00:15:59.103 "strip_size_kb": 0, 00:15:59.103 "state": "online", 00:15:59.103 "raid_level": "raid1", 00:15:59.103 "superblock": false, 00:15:59.103 "num_base_bdevs": 3, 00:15:59.103 "num_base_bdevs_discovered": 2, 00:15:59.103 "num_base_bdevs_operational": 2, 00:15:59.103 "base_bdevs_list": [ 00:15:59.103 { 00:15:59.103 "name": null, 00:15:59.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.103 "is_configured": false, 00:15:59.103 "data_offset": 0, 00:15:59.103 "data_size": 65536 00:15:59.103 }, 00:15:59.103 { 00:15:59.103 "name": "BaseBdev2", 00:15:59.103 "uuid": "1eb29a3f-762a-4f39-89ca-5615a02c376e", 00:15:59.103 "is_configured": true, 00:15:59.103 "data_offset": 0, 00:15:59.103 "data_size": 65536 00:15:59.103 }, 00:15:59.103 { 00:15:59.103 "name": "BaseBdev3", 00:15:59.103 "uuid": "136f406b-eb83-45c6-8a62-e63e4c91bda2", 00:15:59.103 "is_configured": true, 00:15:59.103 "data_offset": 0, 00:15:59.103 "data_size": 65536 00:15:59.103 } 00:15:59.103 ] 00:15:59.103 }' 00:15:59.103 21:12:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:59.103 21:12:21 -- common/autotest_common.sh@10 -- # set +x 00:15:59.671 21:12:22 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:59.671 21:12:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:59.671 21:12:22 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.671 21:12:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:59.929 21:12:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:59.929 21:12:22 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:59.929 21:12:22 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:00.188 [2024-06-07 21:12:22.762643] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
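What the trace above exercises: raid1 has redundancy (has_redundancy returns 0), so removing a single base bdev must leave the array online with one fewer discovered member; only when the last leg is deleted does the state drop to offline. A hedged sketch of that first check, reusing the RPC and jq calls already present in this log; the .num_base_bdevs_discovered path is inferred from the JSON dumps above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # drop one leg of the healthy three-way raid1
    $rpc -s $sock bdev_malloc_delete BaseBdev1
    # the array should survive, online, with 2 of 3 members discovered
    $rpc -s $sock bdev_raid_get_bdevs all \
        | jq -e '.[] | select(.name == "Existed_Raid")
                 | .state == "online" and .num_base_bdevs_discovered == 2'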
00:16:00.188 21:12:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:00.188 21:12:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:00.188 21:12:22 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.188 21:12:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:00.446 21:12:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:00.446 21:12:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:00.446 21:12:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:00.704 [2024-06-07 21:12:23.273190] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:00.704 [2024-06-07 21:12:23.273412] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.704 [2024-06-07 21:12:23.273599] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.704 [2024-06-07 21:12:23.283738] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.704 [2024-06-07 21:12:23.283938] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:16:00.704 21:12:23 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:00.704 21:12:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:00.704 21:12:23 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.704 21:12:23 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:00.962 21:12:23 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:00.962 21:12:23 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:00.962 21:12:23 -- bdev/bdev_raid.sh@287 -- # killprocess 130324 00:16:00.962 21:12:23 -- common/autotest_common.sh@926 -- # '[' -z 130324 ']' 00:16:00.962 21:12:23 -- common/autotest_common.sh@930 -- # kill -0 130324 00:16:00.962 21:12:23 -- common/autotest_common.sh@931 -- # uname 00:16:00.962 21:12:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:00.962 21:12:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130324 00:16:00.962 21:12:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:00.962 21:12:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:00.962 21:12:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130324' 00:16:00.962 killing process with pid 130324 00:16:00.962 21:12:23 -- common/autotest_common.sh@945 -- # kill 130324 00:16:00.962 21:12:23 -- common/autotest_common.sh@950 -- # wait 130324 00:16:00.962 [2024-06-07 21:12:23.569640] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:00.962 [2024-06-07 21:12:23.569729] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:01.231 21:12:23 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:01.231 00:16:01.231 real 0m11.221s 00:16:01.231 user 0m21.006s 00:16:01.231 sys 0m1.215s 00:16:01.231 21:12:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:01.231 21:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:01.231 ************************************ 00:16:01.231 END TEST raid_state_function_test 00:16:01.231 ************************************ 00:16:01.231 21:12:23 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 
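The _sb variant starting here is the same state-function test with superblock=true, which amounts to passing -s to bdev_raid_create: superblock_create_arg is '-s' below versus empty in the run above. Side by side, as both invocations appear in this log; note that with a superblock the base bdevs report data_offset 2048 instead of 0:

    # without an on-disk superblock (raid_state_function_test)
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # with one (raid_state_function_test_sb): data_offset becomes 2048
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid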
00:16:01.231 21:12:23 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:01.231 21:12:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:01.231 21:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:01.231 ************************************ 00:16:01.231 START TEST raid_state_function_test_sb 00:16:01.231 ************************************ 00:16:01.231 21:12:23 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:16:01.231 21:12:23 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:01.231 21:12:23 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:01.231 21:12:23 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:01.231 21:12:23 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@226 -- # raid_pid=130708 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130708' 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:01.232 Process raid pid: 130708 00:16:01.232 21:12:23 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130708 /var/tmp/spdk-raid.sock 00:16:01.232 21:12:23 -- common/autotest_common.sh@819 -- # '[' -z 130708 ']' 00:16:01.232 21:12:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:01.232 21:12:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:01.232 21:12:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:01.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
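One helper worth unpacking before the trace continues: waitforbdev reduces to a single RPC with a wait timeout. bdev_get_bdevs -b <name> -t 2000 blocks until the named bdev appears or 2000 ms elapse, exactly as the calls later in this test show. A minimal sketch using the same commands:

    # create a 32 MiB malloc bdev with 512-byte blocks as a raid member
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    # block until the bdev is visible (2 s timeout), as waitforbdev does
    rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000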
00:16:01.232 21:12:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:01.232 21:12:23 -- common/autotest_common.sh@10 -- # set +x 00:16:01.503 [2024-06-07 21:12:23.927155] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:01.503 [2024-06-07 21:12:23.927545] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.503 [2024-06-07 21:12:24.081202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.503 [2024-06-07 21:12:24.143853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.761 [2024-06-07 21:12:24.198448] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.328 21:12:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:02.328 21:12:24 -- common/autotest_common.sh@852 -- # return 0 00:16:02.328 21:12:24 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:02.587 [2024-06-07 21:12:25.087962] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:02.587 [2024-06-07 21:12:25.088224] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:02.587 [2024-06-07 21:12:25.088335] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.587 [2024-06-07 21:12:25.088396] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.587 [2024-06-07 21:12:25.088644] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:02.587 [2024-06-07 21:12:25.088738] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:02.587 21:12:25 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:02.587 21:12:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:02.587 21:12:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:02.587 21:12:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:02.587 21:12:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:02.587 21:12:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:02.587 21:12:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:02.587 21:12:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:02.587 21:12:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:02.587 21:12:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:02.587 21:12:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.587 21:12:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.845 21:12:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:02.845 "name": "Existed_Raid", 00:16:02.845 "uuid": "1c1608e2-a2ed-4683-bf67-7440c3bb845f", 00:16:02.845 "strip_size_kb": 0, 00:16:02.845 "state": "configuring", 00:16:02.845 "raid_level": "raid1", 00:16:02.845 "superblock": true, 00:16:02.845 "num_base_bdevs": 3, 00:16:02.845 "num_base_bdevs_discovered": 0, 00:16:02.845 "num_base_bdevs_operational": 3, 00:16:02.845 "base_bdevs_list": [ 00:16:02.845 { 00:16:02.845 "name": "BaseBdev1", 
00:16:02.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.845 "is_configured": false, 00:16:02.845 "data_offset": 0, 00:16:02.845 "data_size": 0 00:16:02.845 }, 00:16:02.845 { 00:16:02.845 "name": "BaseBdev2", 00:16:02.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.845 "is_configured": false, 00:16:02.845 "data_offset": 0, 00:16:02.845 "data_size": 0 00:16:02.845 }, 00:16:02.845 { 00:16:02.845 "name": "BaseBdev3", 00:16:02.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.845 "is_configured": false, 00:16:02.845 "data_offset": 0, 00:16:02.845 "data_size": 0 00:16:02.845 } 00:16:02.845 ] 00:16:02.845 }' 00:16:02.845 21:12:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:02.845 21:12:25 -- common/autotest_common.sh@10 -- # set +x 00:16:03.409 21:12:25 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:03.667 [2024-06-07 21:12:26.176030] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:03.667 [2024-06-07 21:12:26.176256] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:03.667 21:12:26 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:03.925 [2024-06-07 21:12:26.384112] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:03.925 [2024-06-07 21:12:26.384378] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:03.925 [2024-06-07 21:12:26.384485] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:03.925 [2024-06-07 21:12:26.384560] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:03.925 [2024-06-07 21:12:26.384660] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:03.925 [2024-06-07 21:12:26.384734] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:03.925 21:12:26 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:04.183 [2024-06-07 21:12:26.639032] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.183 BaseBdev1 00:16:04.183 21:12:26 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:04.183 21:12:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:04.183 21:12:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:04.183 21:12:26 -- common/autotest_common.sh@889 -- # local i 00:16:04.183 21:12:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:04.183 21:12:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:04.183 21:12:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:04.183 21:12:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:04.442 [ 00:16:04.442 { 00:16:04.442 "name": "BaseBdev1", 00:16:04.442 "aliases": [ 00:16:04.442 "11350400-b748-497d-bae9-d5747b03fc9c" 00:16:04.442 ], 00:16:04.442 "product_name": "Malloc disk", 00:16:04.442 "block_size": 512, 00:16:04.442 "num_blocks": 65536, 
00:16:04.442 "uuid": "11350400-b748-497d-bae9-d5747b03fc9c", 00:16:04.442 "assigned_rate_limits": { 00:16:04.442 "rw_ios_per_sec": 0, 00:16:04.442 "rw_mbytes_per_sec": 0, 00:16:04.442 "r_mbytes_per_sec": 0, 00:16:04.442 "w_mbytes_per_sec": 0 00:16:04.442 }, 00:16:04.442 "claimed": true, 00:16:04.442 "claim_type": "exclusive_write", 00:16:04.442 "zoned": false, 00:16:04.442 "supported_io_types": { 00:16:04.442 "read": true, 00:16:04.442 "write": true, 00:16:04.442 "unmap": true, 00:16:04.442 "write_zeroes": true, 00:16:04.442 "flush": true, 00:16:04.442 "reset": true, 00:16:04.442 "compare": false, 00:16:04.442 "compare_and_write": false, 00:16:04.442 "abort": true, 00:16:04.442 "nvme_admin": false, 00:16:04.442 "nvme_io": false 00:16:04.442 }, 00:16:04.442 "memory_domains": [ 00:16:04.442 { 00:16:04.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.442 "dma_device_type": 2 00:16:04.442 } 00:16:04.442 ], 00:16:04.442 "driver_specific": {} 00:16:04.442 } 00:16:04.442 ] 00:16:04.442 21:12:27 -- common/autotest_common.sh@895 -- # return 0 00:16:04.442 21:12:27 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:04.442 21:12:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:04.442 21:12:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:04.442 21:12:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:04.442 21:12:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:04.442 21:12:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:04.442 21:12:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:04.442 21:12:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:04.442 21:12:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:04.442 21:12:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:04.442 21:12:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.442 21:12:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.701 21:12:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:04.701 "name": "Existed_Raid", 00:16:04.701 "uuid": "664616d5-4113-4907-bb98-dc2b1d336e44", 00:16:04.701 "strip_size_kb": 0, 00:16:04.701 "state": "configuring", 00:16:04.701 "raid_level": "raid1", 00:16:04.701 "superblock": true, 00:16:04.701 "num_base_bdevs": 3, 00:16:04.701 "num_base_bdevs_discovered": 1, 00:16:04.701 "num_base_bdevs_operational": 3, 00:16:04.701 "base_bdevs_list": [ 00:16:04.701 { 00:16:04.701 "name": "BaseBdev1", 00:16:04.701 "uuid": "11350400-b748-497d-bae9-d5747b03fc9c", 00:16:04.701 "is_configured": true, 00:16:04.701 "data_offset": 2048, 00:16:04.701 "data_size": 63488 00:16:04.701 }, 00:16:04.701 { 00:16:04.701 "name": "BaseBdev2", 00:16:04.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.701 "is_configured": false, 00:16:04.701 "data_offset": 0, 00:16:04.701 "data_size": 0 00:16:04.701 }, 00:16:04.701 { 00:16:04.701 "name": "BaseBdev3", 00:16:04.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.701 "is_configured": false, 00:16:04.701 "data_offset": 0, 00:16:04.701 "data_size": 0 00:16:04.701 } 00:16:04.701 ] 00:16:04.701 }' 00:16:04.701 21:12:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:04.701 21:12:27 -- common/autotest_common.sh@10 -- # set +x 00:16:05.267 21:12:27 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
Existed_Raid 00:16:05.524 [2024-06-07 21:12:28.123490] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:05.524 [2024-06-07 21:12:28.123729] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:05.524 21:12:28 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:05.524 21:12:28 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:05.782 21:12:28 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:06.040 BaseBdev1 00:16:06.040 21:12:28 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:06.040 21:12:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:06.040 21:12:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:06.040 21:12:28 -- common/autotest_common.sh@889 -- # local i 00:16:06.040 21:12:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:06.040 21:12:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:06.040 21:12:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:06.298 21:12:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:06.556 [ 00:16:06.556 { 00:16:06.556 "name": "BaseBdev1", 00:16:06.556 "aliases": [ 00:16:06.556 "244eace1-a1ac-4370-a744-36040928a921" 00:16:06.556 ], 00:16:06.556 "product_name": "Malloc disk", 00:16:06.556 "block_size": 512, 00:16:06.556 "num_blocks": 65536, 00:16:06.556 "uuid": "244eace1-a1ac-4370-a744-36040928a921", 00:16:06.556 "assigned_rate_limits": { 00:16:06.556 "rw_ios_per_sec": 0, 00:16:06.556 "rw_mbytes_per_sec": 0, 00:16:06.556 "r_mbytes_per_sec": 0, 00:16:06.556 "w_mbytes_per_sec": 0 00:16:06.556 }, 00:16:06.556 "claimed": false, 00:16:06.556 "zoned": false, 00:16:06.556 "supported_io_types": { 00:16:06.556 "read": true, 00:16:06.556 "write": true, 00:16:06.556 "unmap": true, 00:16:06.556 "write_zeroes": true, 00:16:06.556 "flush": true, 00:16:06.556 "reset": true, 00:16:06.556 "compare": false, 00:16:06.556 "compare_and_write": false, 00:16:06.556 "abort": true, 00:16:06.556 "nvme_admin": false, 00:16:06.556 "nvme_io": false 00:16:06.556 }, 00:16:06.556 "memory_domains": [ 00:16:06.556 { 00:16:06.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.556 "dma_device_type": 2 00:16:06.556 } 00:16:06.556 ], 00:16:06.556 "driver_specific": {} 00:16:06.556 } 00:16:06.556 ] 00:16:06.556 21:12:29 -- common/autotest_common.sh@895 -- # return 0 00:16:06.556 21:12:29 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:06.556 [2024-06-07 21:12:29.199753] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.556 [2024-06-07 21:12:29.201810] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:06.556 [2024-06-07 21:12:29.202011] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:06.556 [2024-06-07 21:12:29.202139] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:06.556 [2024-06-07 21:12:29.202207] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:06.556 21:12:29 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:06.556 21:12:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:06.556 21:12:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:06.556 21:12:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:06.556 21:12:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:06.556 21:12:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:06.556 21:12:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:06.556 21:12:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:06.556 21:12:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:06.556 21:12:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:06.556 21:12:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:06.556 21:12:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:06.556 21:12:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.556 21:12:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.814 21:12:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:06.814 "name": "Existed_Raid", 00:16:06.814 "uuid": "5d448124-c80b-4261-8931-93df1b56a556", 00:16:06.814 "strip_size_kb": 0, 00:16:06.814 "state": "configuring", 00:16:06.814 "raid_level": "raid1", 00:16:06.814 "superblock": true, 00:16:06.814 "num_base_bdevs": 3, 00:16:06.814 "num_base_bdevs_discovered": 1, 00:16:06.814 "num_base_bdevs_operational": 3, 00:16:06.814 "base_bdevs_list": [ 00:16:06.814 { 00:16:06.814 "name": "BaseBdev1", 00:16:06.814 "uuid": "244eace1-a1ac-4370-a744-36040928a921", 00:16:06.814 "is_configured": true, 00:16:06.814 "data_offset": 2048, 00:16:06.814 "data_size": 63488 00:16:06.814 }, 00:16:06.814 { 00:16:06.814 "name": "BaseBdev2", 00:16:06.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.814 "is_configured": false, 00:16:06.814 "data_offset": 0, 00:16:06.814 "data_size": 0 00:16:06.814 }, 00:16:06.814 { 00:16:06.815 "name": "BaseBdev3", 00:16:06.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.815 "is_configured": false, 00:16:06.815 "data_offset": 0, 00:16:06.815 "data_size": 0 00:16:06.815 } 00:16:06.815 ] 00:16:06.815 }' 00:16:06.815 21:12:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:06.815 21:12:29 -- common/autotest_common.sh@10 -- # set +x 00:16:07.749 21:12:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:07.749 [2024-06-07 21:12:30.305218] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:07.749 BaseBdev2 00:16:07.749 21:12:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:07.749 21:12:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:07.749 21:12:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:07.749 21:12:30 -- common/autotest_common.sh@889 -- # local i 00:16:07.749 21:12:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:07.749 21:12:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:07.749 21:12:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:08.006 21:12:30 -- common/autotest_common.sh@894 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:08.264 [ 00:16:08.264 { 00:16:08.264 "name": "BaseBdev2", 00:16:08.264 "aliases": [ 00:16:08.264 "32170b03-d034-4161-9316-d2f9e46f598e" 00:16:08.264 ], 00:16:08.264 "product_name": "Malloc disk", 00:16:08.264 "block_size": 512, 00:16:08.264 "num_blocks": 65536, 00:16:08.264 "uuid": "32170b03-d034-4161-9316-d2f9e46f598e", 00:16:08.264 "assigned_rate_limits": { 00:16:08.264 "rw_ios_per_sec": 0, 00:16:08.264 "rw_mbytes_per_sec": 0, 00:16:08.264 "r_mbytes_per_sec": 0, 00:16:08.264 "w_mbytes_per_sec": 0 00:16:08.264 }, 00:16:08.264 "claimed": true, 00:16:08.264 "claim_type": "exclusive_write", 00:16:08.264 "zoned": false, 00:16:08.264 "supported_io_types": { 00:16:08.264 "read": true, 00:16:08.264 "write": true, 00:16:08.264 "unmap": true, 00:16:08.264 "write_zeroes": true, 00:16:08.264 "flush": true, 00:16:08.264 "reset": true, 00:16:08.264 "compare": false, 00:16:08.264 "compare_and_write": false, 00:16:08.264 "abort": true, 00:16:08.264 "nvme_admin": false, 00:16:08.264 "nvme_io": false 00:16:08.264 }, 00:16:08.264 "memory_domains": [ 00:16:08.264 { 00:16:08.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.264 "dma_device_type": 2 00:16:08.264 } 00:16:08.264 ], 00:16:08.264 "driver_specific": {} 00:16:08.264 } 00:16:08.264 ] 00:16:08.264 21:12:30 -- common/autotest_common.sh@895 -- # return 0 00:16:08.264 21:12:30 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:08.264 21:12:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:08.264 21:12:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:08.264 21:12:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:08.264 21:12:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:08.264 21:12:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:08.264 21:12:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:08.264 21:12:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:08.264 21:12:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:08.264 21:12:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:08.264 21:12:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:08.264 21:12:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:08.264 21:12:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.264 21:12:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.524 21:12:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:08.524 "name": "Existed_Raid", 00:16:08.524 "uuid": "5d448124-c80b-4261-8931-93df1b56a556", 00:16:08.524 "strip_size_kb": 0, 00:16:08.524 "state": "configuring", 00:16:08.524 "raid_level": "raid1", 00:16:08.524 "superblock": true, 00:16:08.524 "num_base_bdevs": 3, 00:16:08.524 "num_base_bdevs_discovered": 2, 00:16:08.524 "num_base_bdevs_operational": 3, 00:16:08.524 "base_bdevs_list": [ 00:16:08.524 { 00:16:08.524 "name": "BaseBdev1", 00:16:08.524 "uuid": "244eace1-a1ac-4370-a744-36040928a921", 00:16:08.524 "is_configured": true, 00:16:08.524 "data_offset": 2048, 00:16:08.524 "data_size": 63488 00:16:08.524 }, 00:16:08.524 { 00:16:08.524 "name": "BaseBdev2", 00:16:08.524 "uuid": "32170b03-d034-4161-9316-d2f9e46f598e", 00:16:08.524 "is_configured": true, 00:16:08.524 "data_offset": 2048, 00:16:08.524 "data_size": 63488 00:16:08.524 }, 
00:16:08.524 { 00:16:08.524 "name": "BaseBdev3", 00:16:08.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.524 "is_configured": false, 00:16:08.524 "data_offset": 0, 00:16:08.524 "data_size": 0 00:16:08.524 } 00:16:08.524 ] 00:16:08.524 }' 00:16:08.524 21:12:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:08.524 21:12:31 -- common/autotest_common.sh@10 -- # set +x 00:16:09.098 21:12:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:09.358 [2024-06-07 21:12:31.970924] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:09.358 [2024-06-07 21:12:31.971428] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:16:09.358 [2024-06-07 21:12:31.971595] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:09.358 BaseBdev3 00:16:09.358 [2024-06-07 21:12:31.971777] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:09.358 [2024-06-07 21:12:31.972343] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:16:09.358 [2024-06-07 21:12:31.972463] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:16:09.358 [2024-06-07 21:12:31.972719] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.358 21:12:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:09.358 21:12:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:09.358 21:12:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:09.358 21:12:31 -- common/autotest_common.sh@889 -- # local i 00:16:09.358 21:12:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:09.358 21:12:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:09.358 21:12:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:09.618 21:12:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:09.877 [ 00:16:09.877 { 00:16:09.877 "name": "BaseBdev3", 00:16:09.877 "aliases": [ 00:16:09.877 "8dcda1df-bfcf-44db-b168-2af995edb01e" 00:16:09.877 ], 00:16:09.877 "product_name": "Malloc disk", 00:16:09.877 "block_size": 512, 00:16:09.877 "num_blocks": 65536, 00:16:09.877 "uuid": "8dcda1df-bfcf-44db-b168-2af995edb01e", 00:16:09.877 "assigned_rate_limits": { 00:16:09.877 "rw_ios_per_sec": 0, 00:16:09.877 "rw_mbytes_per_sec": 0, 00:16:09.877 "r_mbytes_per_sec": 0, 00:16:09.877 "w_mbytes_per_sec": 0 00:16:09.877 }, 00:16:09.877 "claimed": true, 00:16:09.877 "claim_type": "exclusive_write", 00:16:09.877 "zoned": false, 00:16:09.877 "supported_io_types": { 00:16:09.877 "read": true, 00:16:09.877 "write": true, 00:16:09.877 "unmap": true, 00:16:09.877 "write_zeroes": true, 00:16:09.877 "flush": true, 00:16:09.877 "reset": true, 00:16:09.877 "compare": false, 00:16:09.877 "compare_and_write": false, 00:16:09.877 "abort": true, 00:16:09.877 "nvme_admin": false, 00:16:09.877 "nvme_io": false 00:16:09.877 }, 00:16:09.877 "memory_domains": [ 00:16:09.877 { 00:16:09.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.877 "dma_device_type": 2 00:16:09.877 } 00:16:09.877 ], 00:16:09.877 "driver_specific": {} 00:16:09.877 } 00:16:09.877 ] 00:16:09.877 21:12:32 -- 
common/autotest_common.sh@895 -- # return 0 00:16:09.877 21:12:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:09.877 21:12:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:09.877 21:12:32 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:09.877 21:12:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:09.877 21:12:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:09.877 21:12:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:09.877 21:12:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:09.877 21:12:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:09.877 21:12:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:09.877 21:12:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:09.877 21:12:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:09.877 21:12:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:09.877 21:12:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.877 21:12:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.137 21:12:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:10.137 "name": "Existed_Raid", 00:16:10.137 "uuid": "5d448124-c80b-4261-8931-93df1b56a556", 00:16:10.137 "strip_size_kb": 0, 00:16:10.137 "state": "online", 00:16:10.137 "raid_level": "raid1", 00:16:10.137 "superblock": true, 00:16:10.137 "num_base_bdevs": 3, 00:16:10.137 "num_base_bdevs_discovered": 3, 00:16:10.137 "num_base_bdevs_operational": 3, 00:16:10.137 "base_bdevs_list": [ 00:16:10.137 { 00:16:10.137 "name": "BaseBdev1", 00:16:10.137 "uuid": "244eace1-a1ac-4370-a744-36040928a921", 00:16:10.137 "is_configured": true, 00:16:10.137 "data_offset": 2048, 00:16:10.137 "data_size": 63488 00:16:10.137 }, 00:16:10.137 { 00:16:10.137 "name": "BaseBdev2", 00:16:10.137 "uuid": "32170b03-d034-4161-9316-d2f9e46f598e", 00:16:10.137 "is_configured": true, 00:16:10.137 "data_offset": 2048, 00:16:10.137 "data_size": 63488 00:16:10.137 }, 00:16:10.137 { 00:16:10.137 "name": "BaseBdev3", 00:16:10.137 "uuid": "8dcda1df-bfcf-44db-b168-2af995edb01e", 00:16:10.137 "is_configured": true, 00:16:10.137 "data_offset": 2048, 00:16:10.137 "data_size": 63488 00:16:10.137 } 00:16:10.138 ] 00:16:10.138 }' 00:16:10.138 21:12:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:10.138 21:12:32 -- common/autotest_common.sh@10 -- # set +x 00:16:11.073 21:12:33 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:11.073 [2024-06-07 21:12:33.691524] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:11.073 21:12:33 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:11.073 21:12:33 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:11.073 21:12:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:11.073 21:12:33 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:11.073 21:12:33 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:11.073 21:12:33 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:11.073 21:12:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:11.073 21:12:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:11.073 21:12:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:11.073 21:12:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 
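In shell terms, the check being traced around here reduces to the minimal sketch below (socket path, RPC names, and jq filter are taken verbatim from the trace; the real verify_raid_bdev_state helper also compares raid_level, strip_size, and the operational count):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Drop one mirror leg; raid1 is redundant, so the array should stay online.
  $rpc bdev_malloc_delete BaseBdev1
  # Re-query the raid bdev and check the degraded counters.
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  [[ $(jq -r .state <<<"$info") == online ]]
  [[ $(jq -r .num_base_bdevs_discovered <<<"$info") == 2 ]]
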
00:16:11.073 21:12:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:11.073 21:12:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:11.073 21:12:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:11.073 21:12:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:11.073 21:12:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:11.073 21:12:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.073 21:12:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.331 21:12:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:11.331 "name": "Existed_Raid", 00:16:11.331 "uuid": "5d448124-c80b-4261-8931-93df1b56a556", 00:16:11.331 "strip_size_kb": 0, 00:16:11.331 "state": "online", 00:16:11.331 "raid_level": "raid1", 00:16:11.331 "superblock": true, 00:16:11.331 "num_base_bdevs": 3, 00:16:11.331 "num_base_bdevs_discovered": 2, 00:16:11.331 "num_base_bdevs_operational": 2, 00:16:11.331 "base_bdevs_list": [ 00:16:11.331 { 00:16:11.331 "name": null, 00:16:11.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.331 "is_configured": false, 00:16:11.331 "data_offset": 2048, 00:16:11.331 "data_size": 63488 00:16:11.331 }, 00:16:11.331 { 00:16:11.331 "name": "BaseBdev2", 00:16:11.331 "uuid": "32170b03-d034-4161-9316-d2f9e46f598e", 00:16:11.331 "is_configured": true, 00:16:11.331 "data_offset": 2048, 00:16:11.331 "data_size": 63488 00:16:11.331 }, 00:16:11.331 { 00:16:11.331 "name": "BaseBdev3", 00:16:11.331 "uuid": "8dcda1df-bfcf-44db-b168-2af995edb01e", 00:16:11.331 "is_configured": true, 00:16:11.331 "data_offset": 2048, 00:16:11.331 "data_size": 63488 00:16:11.331 } 00:16:11.331 ] 00:16:11.331 }' 00:16:11.331 21:12:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:11.331 21:12:33 -- common/autotest_common.sh@10 -- # set +x 00:16:12.265 21:12:34 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:12.265 21:12:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:12.265 21:12:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.265 21:12:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:12.265 21:12:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:12.265 21:12:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:12.265 21:12:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:12.524 [2024-06-07 21:12:35.101717] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:12.524 21:12:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:12.524 21:12:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:12.524 21:12:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.524 21:12:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:12.782 21:12:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:12.782 21:12:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:12.782 21:12:35 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:13.041 [2024-06-07 21:12:35.612241] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:13.041 [2024-06-07 21:12:35.612437] 
bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.041 [2024-06-07 21:12:35.612712] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.041 [2024-06-07 21:12:35.623042] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:13.041 [2024-06-07 21:12:35.623257] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:16:13.041 21:12:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:13.041 21:12:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:13.041 21:12:35 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.041 21:12:35 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:13.300 21:12:35 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:13.300 21:12:35 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:13.300 21:12:35 -- bdev/bdev_raid.sh@287 -- # killprocess 130708 00:16:13.300 21:12:35 -- common/autotest_common.sh@926 -- # '[' -z 130708 ']' 00:16:13.300 21:12:35 -- common/autotest_common.sh@930 -- # kill -0 130708 00:16:13.300 21:12:35 -- common/autotest_common.sh@931 -- # uname 00:16:13.300 21:12:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:13.300 21:12:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130708 00:16:13.300 killing process with pid 130708 00:16:13.300 21:12:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:13.300 21:12:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:13.300 21:12:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130708' 00:16:13.300 21:12:35 -- common/autotest_common.sh@945 -- # kill 130708 00:16:13.300 21:12:35 -- common/autotest_common.sh@950 -- # wait 130708 00:16:13.300 [2024-06-07 21:12:35.862528] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:13.300 [2024-06-07 21:12:35.862668] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:13.559 21:12:36 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:13.559 00:16:13.559 real 0m12.222s 00:16:13.559 user 0m22.745s 00:16:13.559 sys 0m1.342s 00:16:13.559 21:12:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.559 ************************************ 00:16:13.559 END TEST raid_state_function_test_sb 00:16:13.559 ************************************ 00:16:13.559 21:12:36 -- common/autotest_common.sh@10 -- # set +x 00:16:13.559 21:12:36 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:16:13.559 21:12:36 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:13.559 21:12:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:13.559 21:12:36 -- common/autotest_common.sh@10 -- # set +x 00:16:13.559 ************************************ 00:16:13.559 START TEST raid_superblock_test 00:16:13.559 ************************************ 00:16:13.559 21:12:36 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:16:13.559 21:12:36 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:16:13.559 21:12:36 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:13.559 21:12:36 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:13.559 21:12:36 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:13.559 21:12:36 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:13.559 21:12:36 -- 
bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:13.559 21:12:36 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:13.559 21:12:36 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:13.559 21:12:36 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:13.559 21:12:36 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:13.559 21:12:36 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:13.559 21:12:36 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:13.559 21:12:36 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:13.559 21:12:36 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:16:13.559 21:12:36 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:16:13.560 21:12:36 -- bdev/bdev_raid.sh@357 -- # raid_pid=131120 00:16:13.560 21:12:36 -- bdev/bdev_raid.sh@358 -- # waitforlisten 131120 /var/tmp/spdk-raid.sock 00:16:13.560 21:12:36 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:13.560 21:12:36 -- common/autotest_common.sh@819 -- # '[' -z 131120 ']' 00:16:13.560 21:12:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:13.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:13.560 21:12:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:13.560 21:12:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:13.560 21:12:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:13.560 21:12:36 -- common/autotest_common.sh@10 -- # set +x 00:16:13.560 [2024-06-07 21:12:36.205349] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
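The app launch and socket wait that the harness performs at this point can be sketched as follows; the until-loop is an assumption standing in for autotest_common.sh's waitforlisten, and spdk_get_version is just one cheap RPC to poll with, while the binary, socket path, and -L bdev_raid flag come straight from the trace:

  svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Start the minimal bdev app with raid debug logging on a private RPC socket.
  $svc -r "$sock" -L bdev_raid &
  raid_pid=$!
  # Block until the app accepts RPCs on the UNIX domain socket.
  until $rpc -s "$sock" -t 1 spdk_get_version >/dev/null 2>&1; do sleep 0.1; done
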
00:16:13.560 [2024-06-07 21:12:36.205879] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131120 ] 00:16:13.819 [2024-06-07 21:12:36.372072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.819 [2024-06-07 21:12:36.447466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.077 [2024-06-07 21:12:36.501264] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.645 21:12:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:14.645 21:12:37 -- common/autotest_common.sh@852 -- # return 0 00:16:14.645 21:12:37 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:14.645 21:12:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:14.645 21:12:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:14.645 21:12:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:14.645 21:12:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:14.645 21:12:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:14.645 21:12:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:14.645 21:12:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:14.645 21:12:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:14.903 malloc1 00:16:14.903 21:12:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:14.903 [2024-06-07 21:12:37.545191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:14.903 [2024-06-07 21:12:37.545539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.903 [2024-06-07 21:12:37.545617] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:14.903 [2024-06-07 21:12:37.545894] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.903 [2024-06-07 21:12:37.548731] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.903 [2024-06-07 21:12:37.549261] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:14.903 pt1 00:16:14.903 21:12:37 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:14.903 21:12:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:14.903 21:12:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:14.903 21:12:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:14.903 21:12:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:14.903 21:12:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:14.903 21:12:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:14.904 21:12:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:14.904 21:12:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:15.162 malloc2 00:16:15.162 21:12:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:16:15.421 [2024-06-07 21:12:38.008040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:15.421 [2024-06-07 21:12:38.008338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.421 [2024-06-07 21:12:38.008424] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:15.421 [2024-06-07 21:12:38.008589] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.421 [2024-06-07 21:12:38.011220] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.421 [2024-06-07 21:12:38.011403] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:15.421 pt2 00:16:15.421 21:12:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:15.421 21:12:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:15.421 21:12:38 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:15.421 21:12:38 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:15.421 21:12:38 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:15.421 21:12:38 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:15.421 21:12:38 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:15.421 21:12:38 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:15.421 21:12:38 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:15.679 malloc3 00:16:15.679 21:12:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:15.938 [2024-06-07 21:12:38.484289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:15.938 [2024-06-07 21:12:38.484548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.938 [2024-06-07 21:12:38.484626] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:15.938 [2024-06-07 21:12:38.484788] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.938 [2024-06-07 21:12:38.487212] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.938 [2024-06-07 21:12:38.487414] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:15.938 pt3 00:16:15.938 21:12:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:15.938 21:12:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:15.938 21:12:38 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:16.197 [2024-06-07 21:12:38.732422] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:16.197 [2024-06-07 21:12:38.734724] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:16.197 [2024-06-07 21:12:38.734958] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:16.197 [2024-06-07 21:12:38.735248] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:16:16.197 [2024-06-07 21:12:38.735375] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:16.197 [2024-06-07 21:12:38.735581] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:16.197 [2024-06-07 21:12:38.736128] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:16:16.197 [2024-06-07 21:12:38.736271] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:16:16.197 [2024-06-07 21:12:38.736587] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.197 21:12:38 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:16.197 21:12:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:16.197 21:12:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:16.197 21:12:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:16.197 21:12:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:16.197 21:12:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:16.197 21:12:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:16.197 21:12:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:16.197 21:12:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:16.197 21:12:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:16.197 21:12:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.197 21:12:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.455 21:12:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:16.455 "name": "raid_bdev1", 00:16:16.455 "uuid": "e2ac5d4e-a67d-4152-9fd2-bddd74922c90", 00:16:16.455 "strip_size_kb": 0, 00:16:16.455 "state": "online", 00:16:16.455 "raid_level": "raid1", 00:16:16.455 "superblock": true, 00:16:16.455 "num_base_bdevs": 3, 00:16:16.455 "num_base_bdevs_discovered": 3, 00:16:16.455 "num_base_bdevs_operational": 3, 00:16:16.455 "base_bdevs_list": [ 00:16:16.455 { 00:16:16.456 "name": "pt1", 00:16:16.456 "uuid": "99e11b34-c223-5b56-bada-6227370fe343", 00:16:16.456 "is_configured": true, 00:16:16.456 "data_offset": 2048, 00:16:16.456 "data_size": 63488 00:16:16.456 }, 00:16:16.456 { 00:16:16.456 "name": "pt2", 00:16:16.456 "uuid": "a3579fe8-44c9-5f0f-9f02-376ffdba528d", 00:16:16.456 "is_configured": true, 00:16:16.456 "data_offset": 2048, 00:16:16.456 "data_size": 63488 00:16:16.456 }, 00:16:16.456 { 00:16:16.456 "name": "pt3", 00:16:16.456 "uuid": "d4bbd41a-111e-5258-b4d0-f0cbe5c86d41", 00:16:16.456 "is_configured": true, 00:16:16.456 "data_offset": 2048, 00:16:16.456 "data_size": 63488 00:16:16.456 } 00:16:16.456 ] 00:16:16.456 }' 00:16:16.456 21:12:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:16.456 21:12:38 -- common/autotest_common.sh@10 -- # set +x 00:16:17.047 21:12:39 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:17.047 21:12:39 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:17.305 [2024-06-07 21:12:39.793295] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.305 21:12:39 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=e2ac5d4e-a67d-4152-9fd2-bddd74922c90 00:16:17.305 21:12:39 -- bdev/bdev_raid.sh@380 -- # '[' -z e2ac5d4e-a67d-4152-9fd2-bddd74922c90 ']' 00:16:17.305 21:12:39 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:17.648 [2024-06-07 21:12:39.996927] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:17.648 [2024-06-07 21:12:39.997258] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:17.648 [2024-06-07 21:12:39.997511] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:17.648 [2024-06-07 21:12:39.997754] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:17.648 [2024-06-07 21:12:39.997872] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:16:17.648 21:12:40 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.648 21:12:40 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:17.648 21:12:40 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:17.648 21:12:40 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:17.648 21:12:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:17.648 21:12:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:17.907 21:12:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:17.907 21:12:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:18.165 21:12:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:18.165 21:12:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:18.424 21:12:40 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:18.424 21:12:40 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:18.683 21:12:41 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:18.683 21:12:41 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:18.683 21:12:41 -- common/autotest_common.sh@640 -- # local es=0 00:16:18.683 21:12:41 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:18.683 21:12:41 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:18.683 21:12:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:18.683 21:12:41 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:18.683 21:12:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:18.683 21:12:41 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:18.683 21:12:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:18.683 21:12:41 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:18.683 21:12:41 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:18.683 21:12:41 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:18.683 [2024-06-07 21:12:41.321398] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:18.683 [2024-06-07 21:12:41.323780] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:18.683 [2024-06-07 21:12:41.324018] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:18.683 [2024-06-07 21:12:41.324117] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:18.683 [2024-06-07 21:12:41.324383] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:18.684 [2024-06-07 21:12:41.324457] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:18.684 [2024-06-07 21:12:41.324645] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:18.684 [2024-06-07 21:12:41.324686] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:16:18.684 request: 00:16:18.684 { 00:16:18.684 "name": "raid_bdev1", 00:16:18.684 "raid_level": "raid1", 00:16:18.684 "base_bdevs": [ 00:16:18.684 "malloc1", 00:16:18.684 "malloc2", 00:16:18.684 "malloc3" 00:16:18.684 ], 00:16:18.684 "superblock": false, 00:16:18.684 "method": "bdev_raid_create", 00:16:18.684 "req_id": 1 00:16:18.684 } 00:16:18.684 Got JSON-RPC error response 00:16:18.684 response: 00:16:18.684 { 00:16:18.684 "code": -17, 00:16:18.684 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:18.684 } 00:16:18.684 21:12:41 -- common/autotest_common.sh@643 -- # es=1 00:16:18.684 21:12:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:18.684 21:12:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:18.684 21:12:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:18.684 21:12:41 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.684 21:12:41 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:18.942 21:12:41 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:18.942 21:12:41 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:18.942 21:12:41 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:19.201 [2024-06-07 21:12:41.737619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:19.201 [2024-06-07 21:12:41.737754] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.201 [2024-06-07 21:12:41.737828] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:19.201 [2024-06-07 21:12:41.737881] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.201 [2024-06-07 21:12:41.740459] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.201 [2024-06-07 21:12:41.740538] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:19.201 [2024-06-07 21:12:41.740676] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:19.201 [2024-06-07 21:12:41.740772] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:19.201 pt1 00:16:19.201 21:12:41 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:19.202 
21:12:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:19.202 21:12:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:19.202 21:12:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:19.202 21:12:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:19.202 21:12:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:19.202 21:12:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:19.202 21:12:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:19.202 21:12:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:19.202 21:12:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:19.202 21:12:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.202 21:12:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.460 21:12:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:19.460 "name": "raid_bdev1", 00:16:19.460 "uuid": "e2ac5d4e-a67d-4152-9fd2-bddd74922c90", 00:16:19.460 "strip_size_kb": 0, 00:16:19.460 "state": "configuring", 00:16:19.460 "raid_level": "raid1", 00:16:19.460 "superblock": true, 00:16:19.460 "num_base_bdevs": 3, 00:16:19.460 "num_base_bdevs_discovered": 1, 00:16:19.460 "num_base_bdevs_operational": 3, 00:16:19.460 "base_bdevs_list": [ 00:16:19.460 { 00:16:19.460 "name": "pt1", 00:16:19.460 "uuid": "99e11b34-c223-5b56-bada-6227370fe343", 00:16:19.460 "is_configured": true, 00:16:19.460 "data_offset": 2048, 00:16:19.460 "data_size": 63488 00:16:19.460 }, 00:16:19.460 { 00:16:19.460 "name": null, 00:16:19.460 "uuid": "a3579fe8-44c9-5f0f-9f02-376ffdba528d", 00:16:19.460 "is_configured": false, 00:16:19.460 "data_offset": 2048, 00:16:19.460 "data_size": 63488 00:16:19.460 }, 00:16:19.460 { 00:16:19.460 "name": null, 00:16:19.460 "uuid": "d4bbd41a-111e-5258-b4d0-f0cbe5c86d41", 00:16:19.461 "is_configured": false, 00:16:19.461 "data_offset": 2048, 00:16:19.461 "data_size": 63488 00:16:19.461 } 00:16:19.461 ] 00:16:19.461 }' 00:16:19.461 21:12:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:19.461 21:12:41 -- common/autotest_common.sh@10 -- # set +x 00:16:20.028 21:12:42 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:20.028 21:12:42 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:20.287 [2024-06-07 21:12:42.821897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:20.287 [2024-06-07 21:12:42.822155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.287 [2024-06-07 21:12:42.822242] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:20.287 [2024-06-07 21:12:42.822371] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.287 [2024-06-07 21:12:42.822946] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.287 [2024-06-07 21:12:42.823166] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:20.287 [2024-06-07 21:12:42.823418] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:20.287 [2024-06-07 21:12:42.823481] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:20.287 pt2 00:16:20.287 21:12:42 -- bdev/bdev_raid.sh@417 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:20.547 [2024-06-07 21:12:43.085913] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:20.547 21:12:43 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:20.547 21:12:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:20.547 21:12:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:20.547 21:12:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:20.547 21:12:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:20.547 21:12:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:20.547 21:12:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:20.547 21:12:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:20.547 21:12:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:20.547 21:12:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:20.547 21:12:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.547 21:12:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.805 21:12:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:20.805 "name": "raid_bdev1", 00:16:20.805 "uuid": "e2ac5d4e-a67d-4152-9fd2-bddd74922c90", 00:16:20.805 "strip_size_kb": 0, 00:16:20.805 "state": "configuring", 00:16:20.805 "raid_level": "raid1", 00:16:20.805 "superblock": true, 00:16:20.805 "num_base_bdevs": 3, 00:16:20.805 "num_base_bdevs_discovered": 1, 00:16:20.805 "num_base_bdevs_operational": 3, 00:16:20.805 "base_bdevs_list": [ 00:16:20.805 { 00:16:20.805 "name": "pt1", 00:16:20.805 "uuid": "99e11b34-c223-5b56-bada-6227370fe343", 00:16:20.805 "is_configured": true, 00:16:20.805 "data_offset": 2048, 00:16:20.805 "data_size": 63488 00:16:20.805 }, 00:16:20.805 { 00:16:20.805 "name": null, 00:16:20.805 "uuid": "a3579fe8-44c9-5f0f-9f02-376ffdba528d", 00:16:20.805 "is_configured": false, 00:16:20.805 "data_offset": 2048, 00:16:20.805 "data_size": 63488 00:16:20.805 }, 00:16:20.805 { 00:16:20.805 "name": null, 00:16:20.805 "uuid": "d4bbd41a-111e-5258-b4d0-f0cbe5c86d41", 00:16:20.805 "is_configured": false, 00:16:20.805 "data_offset": 2048, 00:16:20.805 "data_size": 63488 00:16:20.805 } 00:16:20.805 ] 00:16:20.805 }' 00:16:20.805 21:12:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:20.805 21:12:43 -- common/autotest_common.sh@10 -- # set +x 00:16:21.371 21:12:43 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:21.371 21:12:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:21.371 21:12:43 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:21.629 [2024-06-07 21:12:44.214081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:21.629 [2024-06-07 21:12:44.214365] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.629 [2024-06-07 21:12:44.214523] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:21.629 [2024-06-07 21:12:44.214651] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.629 [2024-06-07 21:12:44.215212] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.629 [2024-06-07 21:12:44.215370] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:21.629 [2024-06-07 21:12:44.215574] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:21.629 [2024-06-07 21:12:44.215705] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:21.629 pt2 00:16:21.629 21:12:44 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:21.629 21:12:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:21.629 21:12:44 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:21.887 [2024-06-07 21:12:44.474136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:21.887 [2024-06-07 21:12:44.474398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.887 [2024-06-07 21:12:44.474540] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:21.887 [2024-06-07 21:12:44.474658] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.887 [2024-06-07 21:12:44.475214] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.887 [2024-06-07 21:12:44.475397] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:21.887 [2024-06-07 21:12:44.475594] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:21.887 [2024-06-07 21:12:44.475723] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:21.887 [2024-06-07 21:12:44.475988] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:16:21.887 [2024-06-07 21:12:44.476099] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:21.887 [2024-06-07 21:12:44.476212] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:21.887 [2024-06-07 21:12:44.476630] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:16:21.887 [2024-06-07 21:12:44.476756] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:16:21.887 [2024-06-07 21:12:44.477010] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.887 pt3 00:16:21.887 21:12:44 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:21.887 21:12:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:21.887 21:12:44 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:21.887 21:12:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:21.887 21:12:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:21.887 21:12:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:21.887 21:12:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:21.887 21:12:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:21.887 21:12:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:21.887 21:12:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:21.887 21:12:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:21.887 21:12:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:21.887 21:12:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.887 21:12:44 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.146 21:12:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:22.146 "name": "raid_bdev1", 00:16:22.146 "uuid": "e2ac5d4e-a67d-4152-9fd2-bddd74922c90", 00:16:22.146 "strip_size_kb": 0, 00:16:22.146 "state": "online", 00:16:22.146 "raid_level": "raid1", 00:16:22.146 "superblock": true, 00:16:22.146 "num_base_bdevs": 3, 00:16:22.146 "num_base_bdevs_discovered": 3, 00:16:22.146 "num_base_bdevs_operational": 3, 00:16:22.146 "base_bdevs_list": [ 00:16:22.146 { 00:16:22.146 "name": "pt1", 00:16:22.146 "uuid": "99e11b34-c223-5b56-bada-6227370fe343", 00:16:22.146 "is_configured": true, 00:16:22.146 "data_offset": 2048, 00:16:22.146 "data_size": 63488 00:16:22.146 }, 00:16:22.146 { 00:16:22.146 "name": "pt2", 00:16:22.146 "uuid": "a3579fe8-44c9-5f0f-9f02-376ffdba528d", 00:16:22.146 "is_configured": true, 00:16:22.146 "data_offset": 2048, 00:16:22.146 "data_size": 63488 00:16:22.146 }, 00:16:22.146 { 00:16:22.146 "name": "pt3", 00:16:22.146 "uuid": "d4bbd41a-111e-5258-b4d0-f0cbe5c86d41", 00:16:22.146 "is_configured": true, 00:16:22.146 "data_offset": 2048, 00:16:22.146 "data_size": 63488 00:16:22.146 } 00:16:22.146 ] 00:16:22.146 }' 00:16:22.146 21:12:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:22.146 21:12:44 -- common/autotest_common.sh@10 -- # set +x 00:16:22.711 21:12:45 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:22.711 21:12:45 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:22.969 [2024-06-07 21:12:45.558658] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.969 21:12:45 -- bdev/bdev_raid.sh@430 -- # '[' e2ac5d4e-a67d-4152-9fd2-bddd74922c90 '!=' e2ac5d4e-a67d-4152-9fd2-bddd74922c90 ']' 00:16:22.969 21:12:45 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:16:22.969 21:12:45 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:22.969 21:12:45 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:22.969 21:12:45 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:23.228 [2024-06-07 21:12:45.798506] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:23.228 21:12:45 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:23.228 21:12:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:23.228 21:12:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:23.228 21:12:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:23.228 21:12:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:23.228 21:12:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:23.228 21:12:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:23.228 21:12:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:23.228 21:12:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:23.228 21:12:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:23.228 21:12:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.228 21:12:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.487 21:12:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:23.487 "name": "raid_bdev1", 00:16:23.487 "uuid": "e2ac5d4e-a67d-4152-9fd2-bddd74922c90", 00:16:23.487 "strip_size_kb": 0, 00:16:23.487 "state": "online", 
00:16:23.487 "raid_level": "raid1", 00:16:23.487 "superblock": true, 00:16:23.487 "num_base_bdevs": 3, 00:16:23.487 "num_base_bdevs_discovered": 2, 00:16:23.487 "num_base_bdevs_operational": 2, 00:16:23.487 "base_bdevs_list": [ 00:16:23.487 { 00:16:23.487 "name": null, 00:16:23.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.487 "is_configured": false, 00:16:23.487 "data_offset": 2048, 00:16:23.487 "data_size": 63488 00:16:23.487 }, 00:16:23.487 { 00:16:23.487 "name": "pt2", 00:16:23.487 "uuid": "a3579fe8-44c9-5f0f-9f02-376ffdba528d", 00:16:23.487 "is_configured": true, 00:16:23.487 "data_offset": 2048, 00:16:23.487 "data_size": 63488 00:16:23.487 }, 00:16:23.487 { 00:16:23.487 "name": "pt3", 00:16:23.487 "uuid": "d4bbd41a-111e-5258-b4d0-f0cbe5c86d41", 00:16:23.487 "is_configured": true, 00:16:23.487 "data_offset": 2048, 00:16:23.487 "data_size": 63488 00:16:23.487 } 00:16:23.487 ] 00:16:23.487 }' 00:16:23.487 21:12:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:23.487 21:12:46 -- common/autotest_common.sh@10 -- # set +x 00:16:24.054 21:12:46 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:24.313 [2024-06-07 21:12:46.906709] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:24.313 [2024-06-07 21:12:46.906974] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:24.313 [2024-06-07 21:12:46.907154] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.313 [2024-06-07 21:12:46.907369] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:24.313 [2024-06-07 21:12:46.907467] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:16:24.313 21:12:46 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.313 21:12:46 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:16:24.571 21:12:47 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:16:24.571 21:12:47 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:16:24.571 21:12:47 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:16:24.571 21:12:47 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:24.571 21:12:47 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:24.830 21:12:47 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:16:24.830 21:12:47 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:24.830 21:12:47 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:25.089 21:12:47 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:16:25.089 21:12:47 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:25.089 21:12:47 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:16:25.089 21:12:47 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:16:25.089 21:12:47 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:25.348 [2024-06-07 21:12:47.778898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:25.348 [2024-06-07 21:12:47.779151] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.348 [2024-06-07 
21:12:47.779333] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:25.348 [2024-06-07 21:12:47.779456] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.348 [2024-06-07 21:12:47.781833] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.348 [2024-06-07 21:12:47.782025] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:25.348 [2024-06-07 21:12:47.782260] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:25.348 [2024-06-07 21:12:47.782418] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:25.348 pt2 00:16:25.348 21:12:47 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:25.348 21:12:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:25.348 21:12:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:25.348 21:12:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:25.348 21:12:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:25.348 21:12:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:25.348 21:12:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:25.348 21:12:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:25.348 21:12:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:25.348 21:12:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:25.348 21:12:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.348 21:12:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.348 21:12:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:25.348 "name": "raid_bdev1", 00:16:25.348 "uuid": "e2ac5d4e-a67d-4152-9fd2-bddd74922c90", 00:16:25.348 "strip_size_kb": 0, 00:16:25.348 "state": "configuring", 00:16:25.348 "raid_level": "raid1", 00:16:25.348 "superblock": true, 00:16:25.348 "num_base_bdevs": 3, 00:16:25.348 "num_base_bdevs_discovered": 1, 00:16:25.348 "num_base_bdevs_operational": 2, 00:16:25.348 "base_bdevs_list": [ 00:16:25.348 { 00:16:25.348 "name": null, 00:16:25.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.348 "is_configured": false, 00:16:25.348 "data_offset": 2048, 00:16:25.348 "data_size": 63488 00:16:25.348 }, 00:16:25.348 { 00:16:25.348 "name": "pt2", 00:16:25.349 "uuid": "a3579fe8-44c9-5f0f-9f02-376ffdba528d", 00:16:25.349 "is_configured": true, 00:16:25.349 "data_offset": 2048, 00:16:25.349 "data_size": 63488 00:16:25.349 }, 00:16:25.349 { 00:16:25.349 "name": null, 00:16:25.349 "uuid": "d4bbd41a-111e-5258-b4d0-f0cbe5c86d41", 00:16:25.349 "is_configured": false, 00:16:25.349 "data_offset": 2048, 00:16:25.349 "data_size": 63488 00:16:25.349 } 00:16:25.349 ] 00:16:25.349 }' 00:16:25.349 21:12:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:25.349 21:12:47 -- common/autotest_common.sh@10 -- # set +x 00:16:26.284 21:12:48 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:16:26.284 21:12:48 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:16:26.284 21:12:48 -- bdev/bdev_raid.sh@462 -- # i=2 00:16:26.284 21:12:48 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:26.284 [2024-06-07 21:12:48.891126] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:26.284 [2024-06-07 21:12:48.891432] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.284 [2024-06-07 21:12:48.891514] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:26.284 [2024-06-07 21:12:48.891734] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.284 [2024-06-07 21:12:48.892251] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.284 [2024-06-07 21:12:48.892436] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:26.284 [2024-06-07 21:12:48.892665] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:26.284 [2024-06-07 21:12:48.892797] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:26.284 [2024-06-07 21:12:48.892971] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:16:26.284 [2024-06-07 21:12:48.893075] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:26.284 [2024-06-07 21:12:48.893206] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:26.284 [2024-06-07 21:12:48.893699] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:16:26.284 [2024-06-07 21:12:48.893825] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:16:26.284 [2024-06-07 21:12:48.894021] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.284 pt3 00:16:26.284 21:12:48 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:26.284 21:12:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:26.284 21:12:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:26.284 21:12:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:26.284 21:12:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:26.284 21:12:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:26.284 21:12:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:26.285 21:12:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:26.285 21:12:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:26.285 21:12:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:26.285 21:12:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.285 21:12:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.544 21:12:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:26.544 "name": "raid_bdev1", 00:16:26.544 "uuid": "e2ac5d4e-a67d-4152-9fd2-bddd74922c90", 00:16:26.544 "strip_size_kb": 0, 00:16:26.544 "state": "online", 00:16:26.544 "raid_level": "raid1", 00:16:26.544 "superblock": true, 00:16:26.544 "num_base_bdevs": 3, 00:16:26.544 "num_base_bdevs_discovered": 2, 00:16:26.544 "num_base_bdevs_operational": 2, 00:16:26.544 "base_bdevs_list": [ 00:16:26.544 { 00:16:26.544 "name": null, 00:16:26.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.544 "is_configured": false, 00:16:26.544 "data_offset": 2048, 00:16:26.544 "data_size": 63488 00:16:26.544 }, 00:16:26.544 { 00:16:26.544 "name": "pt2", 00:16:26.544 "uuid": "a3579fe8-44c9-5f0f-9f02-376ffdba528d", 00:16:26.544 
"is_configured": true, 00:16:26.544 "data_offset": 2048, 00:16:26.544 "data_size": 63488 00:16:26.544 }, 00:16:26.544 { 00:16:26.544 "name": "pt3", 00:16:26.544 "uuid": "d4bbd41a-111e-5258-b4d0-f0cbe5c86d41", 00:16:26.544 "is_configured": true, 00:16:26.544 "data_offset": 2048, 00:16:26.544 "data_size": 63488 00:16:26.544 } 00:16:26.544 ] 00:16:26.544 }' 00:16:26.544 21:12:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:26.544 21:12:49 -- common/autotest_common.sh@10 -- # set +x 00:16:27.479 21:12:49 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:16:27.479 21:12:49 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:27.479 [2024-06-07 21:12:50.051484] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:27.479 [2024-06-07 21:12:50.051712] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.479 [2024-06-07 21:12:50.051883] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.479 [2024-06-07 21:12:50.052074] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.479 [2024-06-07 21:12:50.052219] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:16:27.479 21:12:50 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.479 21:12:50 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:16:27.737 21:12:50 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:16:27.737 21:12:50 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:16:27.737 21:12:50 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:27.997 [2024-06-07 21:12:50.551603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:27.997 [2024-06-07 21:12:50.551881] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.997 [2024-06-07 21:12:50.551958] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:27.997 [2024-06-07 21:12:50.552229] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.997 [2024-06-07 21:12:50.554605] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.997 [2024-06-07 21:12:50.554793] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:27.997 [2024-06-07 21:12:50.555062] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:27.997 [2024-06-07 21:12:50.555203] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:27.997 pt1 00:16:27.997 21:12:50 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:27.997 21:12:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:27.997 21:12:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:27.997 21:12:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:27.997 21:12:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:27.997 21:12:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:27.997 21:12:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:27.997 21:12:50 -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:16:27.997 21:12:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:27.997 21:12:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:27.997 21:12:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.997 21:12:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.256 21:12:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:28.256 "name": "raid_bdev1", 00:16:28.256 "uuid": "e2ac5d4e-a67d-4152-9fd2-bddd74922c90", 00:16:28.256 "strip_size_kb": 0, 00:16:28.256 "state": "configuring", 00:16:28.256 "raid_level": "raid1", 00:16:28.256 "superblock": true, 00:16:28.256 "num_base_bdevs": 3, 00:16:28.256 "num_base_bdevs_discovered": 1, 00:16:28.256 "num_base_bdevs_operational": 3, 00:16:28.256 "base_bdevs_list": [ 00:16:28.256 { 00:16:28.256 "name": "pt1", 00:16:28.256 "uuid": "99e11b34-c223-5b56-bada-6227370fe343", 00:16:28.256 "is_configured": true, 00:16:28.256 "data_offset": 2048, 00:16:28.256 "data_size": 63488 00:16:28.256 }, 00:16:28.256 { 00:16:28.256 "name": null, 00:16:28.256 "uuid": "a3579fe8-44c9-5f0f-9f02-376ffdba528d", 00:16:28.256 "is_configured": false, 00:16:28.256 "data_offset": 2048, 00:16:28.256 "data_size": 63488 00:16:28.256 }, 00:16:28.256 { 00:16:28.256 "name": null, 00:16:28.256 "uuid": "d4bbd41a-111e-5258-b4d0-f0cbe5c86d41", 00:16:28.256 "is_configured": false, 00:16:28.256 "data_offset": 2048, 00:16:28.256 "data_size": 63488 00:16:28.256 } 00:16:28.256 ] 00:16:28.256 }' 00:16:28.256 21:12:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:28.256 21:12:50 -- common/autotest_common.sh@10 -- # set +x 00:16:28.824 21:12:51 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:16:28.824 21:12:51 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:16:28.824 21:12:51 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:29.082 21:12:51 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:16:29.082 21:12:51 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:16:29.082 21:12:51 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:29.341 21:12:51 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:16:29.341 21:12:51 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:16:29.341 21:12:51 -- bdev/bdev_raid.sh@489 -- # i=2 00:16:29.341 21:12:51 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:29.600 [2024-06-07 21:12:52.087980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:29.600 [2024-06-07 21:12:52.088294] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.600 [2024-06-07 21:12:52.088444] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:29.600 [2024-06-07 21:12:52.088614] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.600 [2024-06-07 21:12:52.089222] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.600 [2024-06-07 21:12:52.089387] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:29.600 [2024-06-07 21:12:52.089609] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:29.600 
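
The @117-@127 trace that repeats throughout this run is the shared verify_raid_bdev_state helper: it snapshots the raid bdev via the bdev_raid_get_bdevs RPC, filters the JSON with jq, and checks it against the expected state, level, strip size, and operational base bdev count. Only the setup half of the helper is echoed in the xtrace, so the sketch below is a plausible reconstruction rather than the verbatim source; the assertion lines and the $rpc_py shorthand (standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock) are assumptions.

    # Reconstructed from the @117-@127 xtrace; the assertions are assumed
    # since they are not echoed in this log.
    verify_raid_bdev_state() {
        local raid_bdev_name=$1
        local expected_state=$2
        local raid_level=$3
        local strip_size=$4
        local num_base_bdevs_operational=$5
        local raid_bdev_info
        local num_base_bdevs num_base_bdevs_discovered tmp  # declared in the trace; use not visible here

        raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")

        [[ $(jq -r '.state' <<< "$raid_bdev_info") == "$expected_state" ]]
        [[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == "$raid_level" ]]
        (( $(jq -r '.strip_size_kb' <<< "$raid_bdev_info") == strip_size ))
        (( $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") == num_base_bdevs_operational ))
    }

The seq_number record that opens the next stretch of the log is the point of this pass: pt3 comes back carrying superblock sequence number 4 against the stale raid_bdev1's 2, so the examine path tears down the old raid bdev before re-claiming pt3.
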
[2024-06-07 21:12:52.089724] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:29.600 [2024-06-07 21:12:52.089820] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:29.600 [2024-06-07 21:12:52.089971] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:16:29.600 [2024-06-07 21:12:52.090136] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:29.600 pt3 00:16:29.600 21:12:52 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:29.600 21:12:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:29.601 21:12:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:29.601 21:12:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:29.601 21:12:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:29.601 21:12:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:29.601 21:12:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:29.601 21:12:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:29.601 21:12:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:29.601 21:12:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:29.601 21:12:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.601 21:12:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.860 21:12:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:29.860 "name": "raid_bdev1", 00:16:29.860 "uuid": "e2ac5d4e-a67d-4152-9fd2-bddd74922c90", 00:16:29.860 "strip_size_kb": 0, 00:16:29.860 "state": "configuring", 00:16:29.860 "raid_level": "raid1", 00:16:29.860 "superblock": true, 00:16:29.860 "num_base_bdevs": 3, 00:16:29.860 "num_base_bdevs_discovered": 1, 00:16:29.860 "num_base_bdevs_operational": 2, 00:16:29.860 "base_bdevs_list": [ 00:16:29.860 { 00:16:29.860 "name": null, 00:16:29.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.860 "is_configured": false, 00:16:29.860 "data_offset": 2048, 00:16:29.860 "data_size": 63488 00:16:29.860 }, 00:16:29.860 { 00:16:29.860 "name": null, 00:16:29.860 "uuid": "a3579fe8-44c9-5f0f-9f02-376ffdba528d", 00:16:29.860 "is_configured": false, 00:16:29.860 "data_offset": 2048, 00:16:29.860 "data_size": 63488 00:16:29.860 }, 00:16:29.860 { 00:16:29.860 "name": "pt3", 00:16:29.860 "uuid": "d4bbd41a-111e-5258-b4d0-f0cbe5c86d41", 00:16:29.860 "is_configured": true, 00:16:29.860 "data_offset": 2048, 00:16:29.860 "data_size": 63488 00:16:29.860 } 00:16:29.860 ] 00:16:29.860 }' 00:16:29.860 21:12:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:29.860 21:12:52 -- common/autotest_common.sh@10 -- # set +x 00:16:30.426 21:12:53 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:16:30.426 21:12:53 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:16:30.426 21:12:53 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:30.683 [2024-06-07 21:12:53.268257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:30.683 [2024-06-07 21:12:53.268574] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.683 [2024-06-07 21:12:53.268717] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:30.683 [2024-06-07 21:12:53.268837] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.683 [2024-06-07 21:12:53.269480] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.683 [2024-06-07 21:12:53.269647] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:30.683 [2024-06-07 21:12:53.269823] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:30.683 [2024-06-07 21:12:53.269969] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:30.683 [2024-06-07 21:12:53.270130] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:16:30.683 [2024-06-07 21:12:53.270253] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:30.684 [2024-06-07 21:12:53.270385] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:30.684 [2024-06-07 21:12:53.270796] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:16:30.684 [2024-06-07 21:12:53.270933] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:16:30.684 [2024-06-07 21:12:53.271145] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.684 pt2 00:16:30.684 21:12:53 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:16:30.684 21:12:53 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:16:30.684 21:12:53 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:30.684 21:12:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:30.684 21:12:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:30.684 21:12:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:30.684 21:12:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:30.684 21:12:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:30.684 21:12:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:30.684 21:12:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:30.684 21:12:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:30.684 21:12:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:30.684 21:12:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.684 21:12:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.941 21:12:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:30.941 "name": "raid_bdev1", 00:16:30.941 "uuid": "e2ac5d4e-a67d-4152-9fd2-bddd74922c90", 00:16:30.941 "strip_size_kb": 0, 00:16:30.941 "state": "online", 00:16:30.941 "raid_level": "raid1", 00:16:30.941 "superblock": true, 00:16:30.941 "num_base_bdevs": 3, 00:16:30.941 "num_base_bdevs_discovered": 2, 00:16:30.941 "num_base_bdevs_operational": 2, 00:16:30.941 "base_bdevs_list": [ 00:16:30.941 { 00:16:30.941 "name": null, 00:16:30.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.941 "is_configured": false, 00:16:30.941 "data_offset": 2048, 00:16:30.941 "data_size": 63488 00:16:30.941 }, 00:16:30.941 { 00:16:30.941 "name": "pt2", 00:16:30.941 "uuid": "a3579fe8-44c9-5f0f-9f02-376ffdba528d", 00:16:30.941 "is_configured": true, 00:16:30.941 "data_offset": 2048, 00:16:30.941 "data_size": 63488 00:16:30.941 
}, 00:16:30.941 { 00:16:30.941 "name": "pt3", 00:16:30.941 "uuid": "d4bbd41a-111e-5258-b4d0-f0cbe5c86d41", 00:16:30.941 "is_configured": true, 00:16:30.941 "data_offset": 2048, 00:16:30.941 "data_size": 63488 00:16:30.941 } 00:16:30.941 ] 00:16:30.941 }' 00:16:30.941 21:12:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:30.941 21:12:53 -- common/autotest_common.sh@10 -- # set +x 00:16:31.526 21:12:54 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:31.526 21:12:54 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:16:31.794 [2024-06-07 21:12:54.360712] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.794 21:12:54 -- bdev/bdev_raid.sh@506 -- # '[' e2ac5d4e-a67d-4152-9fd2-bddd74922c90 '!=' e2ac5d4e-a67d-4152-9fd2-bddd74922c90 ']' 00:16:31.794 21:12:54 -- bdev/bdev_raid.sh@511 -- # killprocess 131120 00:16:31.794 21:12:54 -- common/autotest_common.sh@926 -- # '[' -z 131120 ']' 00:16:31.794 21:12:54 -- common/autotest_common.sh@930 -- # kill -0 131120 00:16:31.794 21:12:54 -- common/autotest_common.sh@931 -- # uname 00:16:31.794 21:12:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:31.794 21:12:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131120 00:16:31.794 killing process with pid 131120 00:16:31.794 21:12:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:31.794 21:12:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:31.794 21:12:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131120' 00:16:31.794 21:12:54 -- common/autotest_common.sh@945 -- # kill 131120 00:16:31.794 21:12:54 -- common/autotest_common.sh@950 -- # wait 131120 00:16:31.794 [2024-06-07 21:12:54.394778] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:31.794 [2024-06-07 21:12:54.394882] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.794 [2024-06-07 21:12:54.394972] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:31.794 [2024-06-07 21:12:54.395090] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:16:31.794 [2024-06-07 21:12:54.426057] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:32.052 ************************************ 00:16:32.052 END TEST raid_superblock_test 00:16:32.052 ************************************ 00:16:32.052 21:12:54 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:32.052 00:16:32.052 real 0m18.506s 00:16:32.052 user 0m35.152s 00:16:32.052 sys 0m2.072s 00:16:32.052 21:12:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:32.052 21:12:54 -- common/autotest_common.sh@10 -- # set +x 00:16:32.052 21:12:54 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:16:32.052 21:12:54 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:32.052 21:12:54 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:16:32.052 21:12:54 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:32.052 21:12:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:32.052 21:12:54 -- common/autotest_common.sh@10 -- # set +x 00:16:32.052 ************************************ 00:16:32.052 START TEST raid_state_function_test 00:16:32.053 ************************************ 00:16:32.053 21:12:54 -- 
common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:32.053 21:12:54 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:32.311 21:12:54 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:32.311 21:12:54 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:32.311 21:12:54 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:32.311 21:12:54 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:32.311 21:12:54 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:32.311 21:12:54 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:32.311 21:12:54 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:32.311 21:12:54 -- bdev/bdev_raid.sh@226 -- # raid_pid=131755 00:16:32.311 21:12:54 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:32.311 Process raid pid: 131755 00:16:32.311 21:12:54 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 131755' 00:16:32.311 21:12:54 -- bdev/bdev_raid.sh@228 -- # waitforlisten 131755 /var/tmp/spdk-raid.sock 00:16:32.311 21:12:54 -- common/autotest_common.sh@819 -- # '[' -z 131755 ']' 00:16:32.311 21:12:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:32.311 21:12:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:32.311 21:12:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:32.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:32.311 21:12:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:32.311 21:12:54 -- common/autotest_common.sh@10 -- # set +x 00:16:32.311 [2024-06-07 21:12:54.782035] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
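
Here raid_state_function_test brings up a dedicated bdev_svc app as its RPC target, keyed to /var/tmp/spdk-raid.sock, rather than reusing a shared one. The @225-@228 records show the launch and the waitforlisten handoff; the polling body of waitforlisten is not visible in this log, so the rpc_get_methods liveness probe and the $rootdir prefix in the sketch below are assumptions.

    # Minimal sketch of the @225-@228 launch-and-wait sequence.
    rpc_sock=/var/tmp/spdk-raid.sock
    "$rootdir/test/app/bdev_svc/bdev_svc" -r "$rpc_sock" -i 0 -L bdev_raid &
    raid_pid=$!
    echo "Process raid pid: $raid_pid"
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
    # assumed probe: retry a cheap RPC until the socket answers
    while ! "$rootdir/scripts/rpc.py" -s "$rpc_sock" -t 1 rpc_get_methods &> /dev/null; do
        kill -0 "$raid_pid" || break   # stop waiting if bdev_svc died on startup
        sleep 0.1
    done
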
00:16:32.311 [2024-06-07 21:12:54.782484] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.311 [2024-06-07 21:12:54.946098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.569 [2024-06-07 21:12:55.016175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.569 [2024-06-07 21:12:55.069158] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.136 21:12:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:33.136 21:12:55 -- common/autotest_common.sh@852 -- # return 0 00:16:33.136 21:12:55 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:33.394 [2024-06-07 21:12:55.938983] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:33.394 [2024-06-07 21:12:55.939382] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:33.394 [2024-06-07 21:12:55.939508] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:33.394 [2024-06-07 21:12:55.939568] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:33.395 [2024-06-07 21:12:55.939684] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:33.395 [2024-06-07 21:12:55.939761] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:33.395 [2024-06-07 21:12:55.939880] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:33.395 [2024-06-07 21:12:55.939999] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:33.395 21:12:55 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:33.395 21:12:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:33.395 21:12:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:33.395 21:12:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:33.395 21:12:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:33.395 21:12:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:33.395 21:12:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:33.395 21:12:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:33.395 21:12:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:33.395 21:12:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:33.395 21:12:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.395 21:12:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.653 21:12:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:33.653 "name": "Existed_Raid", 00:16:33.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.653 "strip_size_kb": 64, 00:16:33.653 "state": "configuring", 00:16:33.653 "raid_level": "raid0", 00:16:33.653 "superblock": false, 00:16:33.653 "num_base_bdevs": 4, 00:16:33.654 "num_base_bdevs_discovered": 0, 00:16:33.654 "num_base_bdevs_operational": 4, 00:16:33.654 "base_bdevs_list": [ 00:16:33.654 { 00:16:33.654 
"name": "BaseBdev1", 00:16:33.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.654 "is_configured": false, 00:16:33.654 "data_offset": 0, 00:16:33.654 "data_size": 0 00:16:33.654 }, 00:16:33.654 { 00:16:33.654 "name": "BaseBdev2", 00:16:33.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.654 "is_configured": false, 00:16:33.654 "data_offset": 0, 00:16:33.654 "data_size": 0 00:16:33.654 }, 00:16:33.654 { 00:16:33.654 "name": "BaseBdev3", 00:16:33.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.654 "is_configured": false, 00:16:33.654 "data_offset": 0, 00:16:33.654 "data_size": 0 00:16:33.654 }, 00:16:33.654 { 00:16:33.654 "name": "BaseBdev4", 00:16:33.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.654 "is_configured": false, 00:16:33.654 "data_offset": 0, 00:16:33.654 "data_size": 0 00:16:33.654 } 00:16:33.654 ] 00:16:33.654 }' 00:16:33.654 21:12:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:33.654 21:12:56 -- common/autotest_common.sh@10 -- # set +x 00:16:34.220 21:12:56 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:34.478 [2024-06-07 21:12:57.023124] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:34.478 [2024-06-07 21:12:57.023400] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:34.478 21:12:57 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:34.736 [2024-06-07 21:12:57.215160] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:34.736 [2024-06-07 21:12:57.215378] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:34.736 [2024-06-07 21:12:57.215527] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.736 [2024-06-07 21:12:57.215598] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.736 [2024-06-07 21:12:57.215832] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:34.736 [2024-06-07 21:12:57.215909] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:34.736 [2024-06-07 21:12:57.216215] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:34.736 [2024-06-07 21:12:57.216286] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:34.736 21:12:57 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:34.994 [2024-06-07 21:12:57.434433] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:34.994 BaseBdev1 00:16:34.994 21:12:57 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:34.994 21:12:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:34.994 21:12:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:34.994 21:12:57 -- common/autotest_common.sh@889 -- # local i 00:16:34.994 21:12:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:34.994 21:12:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:34.994 21:12:57 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:34.995 21:12:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:35.253 [ 00:16:35.253 { 00:16:35.253 "name": "BaseBdev1", 00:16:35.253 "aliases": [ 00:16:35.253 "40ce6d23-ae73-45f0-a734-9e1fe7ba47c4" 00:16:35.253 ], 00:16:35.253 "product_name": "Malloc disk", 00:16:35.253 "block_size": 512, 00:16:35.253 "num_blocks": 65536, 00:16:35.253 "uuid": "40ce6d23-ae73-45f0-a734-9e1fe7ba47c4", 00:16:35.253 "assigned_rate_limits": { 00:16:35.253 "rw_ios_per_sec": 0, 00:16:35.253 "rw_mbytes_per_sec": 0, 00:16:35.254 "r_mbytes_per_sec": 0, 00:16:35.254 "w_mbytes_per_sec": 0 00:16:35.254 }, 00:16:35.254 "claimed": true, 00:16:35.254 "claim_type": "exclusive_write", 00:16:35.254 "zoned": false, 00:16:35.254 "supported_io_types": { 00:16:35.254 "read": true, 00:16:35.254 "write": true, 00:16:35.254 "unmap": true, 00:16:35.254 "write_zeroes": true, 00:16:35.254 "flush": true, 00:16:35.254 "reset": true, 00:16:35.254 "compare": false, 00:16:35.254 "compare_and_write": false, 00:16:35.254 "abort": true, 00:16:35.254 "nvme_admin": false, 00:16:35.254 "nvme_io": false 00:16:35.254 }, 00:16:35.254 "memory_domains": [ 00:16:35.254 { 00:16:35.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.254 "dma_device_type": 2 00:16:35.254 } 00:16:35.254 ], 00:16:35.254 "driver_specific": {} 00:16:35.254 } 00:16:35.254 ] 00:16:35.254 21:12:57 -- common/autotest_common.sh@895 -- # return 0 00:16:35.254 21:12:57 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:35.254 21:12:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:35.254 21:12:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:35.254 21:12:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:35.254 21:12:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:35.254 21:12:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:35.254 21:12:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:35.254 21:12:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:35.254 21:12:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:35.254 21:12:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:35.254 21:12:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.254 21:12:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.513 21:12:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:35.513 "name": "Existed_Raid", 00:16:35.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.513 "strip_size_kb": 64, 00:16:35.513 "state": "configuring", 00:16:35.513 "raid_level": "raid0", 00:16:35.513 "superblock": false, 00:16:35.513 "num_base_bdevs": 4, 00:16:35.513 "num_base_bdevs_discovered": 1, 00:16:35.513 "num_base_bdevs_operational": 4, 00:16:35.513 "base_bdevs_list": [ 00:16:35.513 { 00:16:35.513 "name": "BaseBdev1", 00:16:35.513 "uuid": "40ce6d23-ae73-45f0-a734-9e1fe7ba47c4", 00:16:35.513 "is_configured": true, 00:16:35.513 "data_offset": 0, 00:16:35.513 "data_size": 65536 00:16:35.513 }, 00:16:35.513 { 00:16:35.513 "name": "BaseBdev2", 00:16:35.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.513 "is_configured": false, 00:16:35.513 "data_offset": 0, 00:16:35.513 "data_size": 0 00:16:35.513 }, 
00:16:35.513 { 00:16:35.513 "name": "BaseBdev3", 00:16:35.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.513 "is_configured": false, 00:16:35.513 "data_offset": 0, 00:16:35.513 "data_size": 0 00:16:35.513 }, 00:16:35.513 { 00:16:35.513 "name": "BaseBdev4", 00:16:35.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.513 "is_configured": false, 00:16:35.513 "data_offset": 0, 00:16:35.513 "data_size": 0 00:16:35.513 } 00:16:35.513 ] 00:16:35.513 }' 00:16:35.513 21:12:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:35.513 21:12:58 -- common/autotest_common.sh@10 -- # set +x 00:16:36.080 21:12:58 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:36.339 [2024-06-07 21:12:58.986841] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:36.339 [2024-06-07 21:12:58.987077] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:36.339 21:12:58 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:36.339 21:12:58 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:36.906 [2024-06-07 21:12:59.282966] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:36.906 [2024-06-07 21:12:59.285009] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.906 [2024-06-07 21:12:59.285258] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.906 [2024-06-07 21:12:59.285380] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:36.906 [2024-06-07 21:12:59.285520] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:36.906 [2024-06-07 21:12:59.285612] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:36.906 [2024-06-07 21:12:59.285663] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:36.906 21:12:59 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:36.906 21:12:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:36.906 21:12:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:36.906 21:12:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:36.906 21:12:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:36.906 21:12:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:36.906 21:12:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:36.906 21:12:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:36.906 21:12:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:36.906 21:12:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:36.906 21:12:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:36.906 21:12:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:36.906 21:12:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.906 21:12:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.906 21:12:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:36.906 "name": "Existed_Raid", 00:16:36.906 
"uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.906 "strip_size_kb": 64, 00:16:36.906 "state": "configuring", 00:16:36.906 "raid_level": "raid0", 00:16:36.906 "superblock": false, 00:16:36.906 "num_base_bdevs": 4, 00:16:36.906 "num_base_bdevs_discovered": 1, 00:16:36.906 "num_base_bdevs_operational": 4, 00:16:36.906 "base_bdevs_list": [ 00:16:36.906 { 00:16:36.906 "name": "BaseBdev1", 00:16:36.906 "uuid": "40ce6d23-ae73-45f0-a734-9e1fe7ba47c4", 00:16:36.906 "is_configured": true, 00:16:36.906 "data_offset": 0, 00:16:36.906 "data_size": 65536 00:16:36.906 }, 00:16:36.906 { 00:16:36.906 "name": "BaseBdev2", 00:16:36.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.906 "is_configured": false, 00:16:36.906 "data_offset": 0, 00:16:36.906 "data_size": 0 00:16:36.906 }, 00:16:36.906 { 00:16:36.906 "name": "BaseBdev3", 00:16:36.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.906 "is_configured": false, 00:16:36.906 "data_offset": 0, 00:16:36.906 "data_size": 0 00:16:36.906 }, 00:16:36.906 { 00:16:36.906 "name": "BaseBdev4", 00:16:36.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.906 "is_configured": false, 00:16:36.906 "data_offset": 0, 00:16:36.906 "data_size": 0 00:16:36.906 } 00:16:36.906 ] 00:16:36.906 }' 00:16:36.906 21:12:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:36.906 21:12:59 -- common/autotest_common.sh@10 -- # set +x 00:16:37.845 21:13:00 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:37.845 [2024-06-07 21:13:00.448857] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.845 BaseBdev2 00:16:37.845 21:13:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:37.845 21:13:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:37.845 21:13:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:37.845 21:13:00 -- common/autotest_common.sh@889 -- # local i 00:16:37.845 21:13:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:37.845 21:13:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:37.845 21:13:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:38.103 21:13:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:38.413 [ 00:16:38.413 { 00:16:38.413 "name": "BaseBdev2", 00:16:38.413 "aliases": [ 00:16:38.413 "3edcf972-cfcd-4a59-814c-d8e326654a41" 00:16:38.413 ], 00:16:38.413 "product_name": "Malloc disk", 00:16:38.413 "block_size": 512, 00:16:38.413 "num_blocks": 65536, 00:16:38.413 "uuid": "3edcf972-cfcd-4a59-814c-d8e326654a41", 00:16:38.413 "assigned_rate_limits": { 00:16:38.413 "rw_ios_per_sec": 0, 00:16:38.413 "rw_mbytes_per_sec": 0, 00:16:38.413 "r_mbytes_per_sec": 0, 00:16:38.413 "w_mbytes_per_sec": 0 00:16:38.413 }, 00:16:38.413 "claimed": true, 00:16:38.413 "claim_type": "exclusive_write", 00:16:38.413 "zoned": false, 00:16:38.413 "supported_io_types": { 00:16:38.413 "read": true, 00:16:38.413 "write": true, 00:16:38.413 "unmap": true, 00:16:38.413 "write_zeroes": true, 00:16:38.413 "flush": true, 00:16:38.413 "reset": true, 00:16:38.413 "compare": false, 00:16:38.413 "compare_and_write": false, 00:16:38.413 "abort": true, 00:16:38.413 "nvme_admin": false, 00:16:38.413 "nvme_io": false 00:16:38.413 }, 00:16:38.413 "memory_domains": [ 
00:16:38.413 { 00:16:38.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.413 "dma_device_type": 2 00:16:38.413 } 00:16:38.413 ], 00:16:38.413 "driver_specific": {} 00:16:38.413 } 00:16:38.413 ] 00:16:38.413 21:13:00 -- common/autotest_common.sh@895 -- # return 0 00:16:38.413 21:13:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:38.413 21:13:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:38.413 21:13:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:38.413 21:13:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:38.413 21:13:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:38.413 21:13:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:38.413 21:13:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:38.413 21:13:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:38.413 21:13:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:38.413 21:13:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:38.413 21:13:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:38.413 21:13:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:38.413 21:13:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.413 21:13:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.672 21:13:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:38.672 "name": "Existed_Raid", 00:16:38.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.672 "strip_size_kb": 64, 00:16:38.672 "state": "configuring", 00:16:38.672 "raid_level": "raid0", 00:16:38.672 "superblock": false, 00:16:38.672 "num_base_bdevs": 4, 00:16:38.672 "num_base_bdevs_discovered": 2, 00:16:38.672 "num_base_bdevs_operational": 4, 00:16:38.672 "base_bdevs_list": [ 00:16:38.672 { 00:16:38.672 "name": "BaseBdev1", 00:16:38.672 "uuid": "40ce6d23-ae73-45f0-a734-9e1fe7ba47c4", 00:16:38.672 "is_configured": true, 00:16:38.672 "data_offset": 0, 00:16:38.672 "data_size": 65536 00:16:38.672 }, 00:16:38.672 { 00:16:38.672 "name": "BaseBdev2", 00:16:38.672 "uuid": "3edcf972-cfcd-4a59-814c-d8e326654a41", 00:16:38.672 "is_configured": true, 00:16:38.672 "data_offset": 0, 00:16:38.672 "data_size": 65536 00:16:38.672 }, 00:16:38.672 { 00:16:38.672 "name": "BaseBdev3", 00:16:38.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.672 "is_configured": false, 00:16:38.672 "data_offset": 0, 00:16:38.672 "data_size": 0 00:16:38.672 }, 00:16:38.672 { 00:16:38.672 "name": "BaseBdev4", 00:16:38.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.672 "is_configured": false, 00:16:38.672 "data_offset": 0, 00:16:38.672 "data_size": 0 00:16:38.672 } 00:16:38.672 ] 00:16:38.672 }' 00:16:38.672 21:13:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:38.672 21:13:01 -- common/autotest_common.sh@10 -- # set +x 00:16:39.239 21:13:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:39.498 [2024-06-07 21:13:02.110271] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:39.498 BaseBdev3 00:16:39.498 21:13:02 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:39.498 21:13:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:39.498 21:13:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:39.498 
21:13:02 -- common/autotest_common.sh@889 -- # local i 00:16:39.498 21:13:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:39.498 21:13:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:39.498 21:13:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:39.756 21:13:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:40.014 [ 00:16:40.014 { 00:16:40.014 "name": "BaseBdev3", 00:16:40.014 "aliases": [ 00:16:40.014 "c16ec73f-4491-438a-9177-ac71be5bff2a" 00:16:40.014 ], 00:16:40.014 "product_name": "Malloc disk", 00:16:40.014 "block_size": 512, 00:16:40.014 "num_blocks": 65536, 00:16:40.014 "uuid": "c16ec73f-4491-438a-9177-ac71be5bff2a", 00:16:40.014 "assigned_rate_limits": { 00:16:40.014 "rw_ios_per_sec": 0, 00:16:40.014 "rw_mbytes_per_sec": 0, 00:16:40.014 "r_mbytes_per_sec": 0, 00:16:40.014 "w_mbytes_per_sec": 0 00:16:40.014 }, 00:16:40.014 "claimed": true, 00:16:40.014 "claim_type": "exclusive_write", 00:16:40.014 "zoned": false, 00:16:40.014 "supported_io_types": { 00:16:40.014 "read": true, 00:16:40.014 "write": true, 00:16:40.014 "unmap": true, 00:16:40.014 "write_zeroes": true, 00:16:40.014 "flush": true, 00:16:40.014 "reset": true, 00:16:40.014 "compare": false, 00:16:40.014 "compare_and_write": false, 00:16:40.014 "abort": true, 00:16:40.014 "nvme_admin": false, 00:16:40.014 "nvme_io": false 00:16:40.014 }, 00:16:40.014 "memory_domains": [ 00:16:40.014 { 00:16:40.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.014 "dma_device_type": 2 00:16:40.014 } 00:16:40.014 ], 00:16:40.014 "driver_specific": {} 00:16:40.014 } 00:16:40.014 ] 00:16:40.014 21:13:02 -- common/autotest_common.sh@895 -- # return 0 00:16:40.014 21:13:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:40.014 21:13:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:40.014 21:13:02 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:40.014 21:13:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:40.014 21:13:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:40.014 21:13:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:40.015 21:13:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:40.015 21:13:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:40.015 21:13:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:40.015 21:13:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:40.015 21:13:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:40.015 21:13:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:40.015 21:13:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.015 21:13:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.273 21:13:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:40.273 "name": "Existed_Raid", 00:16:40.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.273 "strip_size_kb": 64, 00:16:40.273 "state": "configuring", 00:16:40.273 "raid_level": "raid0", 00:16:40.273 "superblock": false, 00:16:40.273 "num_base_bdevs": 4, 00:16:40.273 "num_base_bdevs_discovered": 3, 00:16:40.273 "num_base_bdevs_operational": 4, 00:16:40.273 "base_bdevs_list": [ 00:16:40.273 { 00:16:40.273 "name": 
"BaseBdev1", 00:16:40.273 "uuid": "40ce6d23-ae73-45f0-a734-9e1fe7ba47c4", 00:16:40.273 "is_configured": true, 00:16:40.273 "data_offset": 0, 00:16:40.273 "data_size": 65536 00:16:40.273 }, 00:16:40.273 { 00:16:40.273 "name": "BaseBdev2", 00:16:40.273 "uuid": "3edcf972-cfcd-4a59-814c-d8e326654a41", 00:16:40.273 "is_configured": true, 00:16:40.273 "data_offset": 0, 00:16:40.273 "data_size": 65536 00:16:40.273 }, 00:16:40.273 { 00:16:40.273 "name": "BaseBdev3", 00:16:40.273 "uuid": "c16ec73f-4491-438a-9177-ac71be5bff2a", 00:16:40.273 "is_configured": true, 00:16:40.273 "data_offset": 0, 00:16:40.273 "data_size": 65536 00:16:40.273 }, 00:16:40.273 { 00:16:40.273 "name": "BaseBdev4", 00:16:40.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.273 "is_configured": false, 00:16:40.273 "data_offset": 0, 00:16:40.273 "data_size": 0 00:16:40.273 } 00:16:40.273 ] 00:16:40.273 }' 00:16:40.273 21:13:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:40.273 21:13:02 -- common/autotest_common.sh@10 -- # set +x 00:16:40.840 21:13:03 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:41.099 [2024-06-07 21:13:03.752002] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:41.099 [2024-06-07 21:13:03.752274] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:41.099 [2024-06-07 21:13:03.752326] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:41.099 [2024-06-07 21:13:03.752639] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:41.099 [2024-06-07 21:13:03.753229] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:41.099 [2024-06-07 21:13:03.753376] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:16:41.099 [2024-06-07 21:13:03.753740] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.099 BaseBdev4 00:16:41.099 21:13:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:16:41.099 21:13:03 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:16:41.099 21:13:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:41.099 21:13:03 -- common/autotest_common.sh@889 -- # local i 00:16:41.099 21:13:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:41.099 21:13:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:41.099 21:13:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:41.358 21:13:03 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:41.617 [ 00:16:41.617 { 00:16:41.617 "name": "BaseBdev4", 00:16:41.617 "aliases": [ 00:16:41.617 "cf726e44-c896-4938-a7d8-59b53f7c8bb0" 00:16:41.617 ], 00:16:41.617 "product_name": "Malloc disk", 00:16:41.617 "block_size": 512, 00:16:41.617 "num_blocks": 65536, 00:16:41.617 "uuid": "cf726e44-c896-4938-a7d8-59b53f7c8bb0", 00:16:41.617 "assigned_rate_limits": { 00:16:41.617 "rw_ios_per_sec": 0, 00:16:41.617 "rw_mbytes_per_sec": 0, 00:16:41.617 "r_mbytes_per_sec": 0, 00:16:41.617 "w_mbytes_per_sec": 0 00:16:41.617 }, 00:16:41.617 "claimed": true, 00:16:41.617 "claim_type": "exclusive_write", 00:16:41.617 "zoned": false, 00:16:41.617 
"supported_io_types": { 00:16:41.617 "read": true, 00:16:41.617 "write": true, 00:16:41.617 "unmap": true, 00:16:41.617 "write_zeroes": true, 00:16:41.617 "flush": true, 00:16:41.617 "reset": true, 00:16:41.617 "compare": false, 00:16:41.617 "compare_and_write": false, 00:16:41.617 "abort": true, 00:16:41.617 "nvme_admin": false, 00:16:41.617 "nvme_io": false 00:16:41.617 }, 00:16:41.617 "memory_domains": [ 00:16:41.617 { 00:16:41.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.617 "dma_device_type": 2 00:16:41.617 } 00:16:41.617 ], 00:16:41.617 "driver_specific": {} 00:16:41.617 } 00:16:41.617 ] 00:16:41.617 21:13:04 -- common/autotest_common.sh@895 -- # return 0 00:16:41.617 21:13:04 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:41.617 21:13:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:41.617 21:13:04 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:41.617 21:13:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:41.617 21:13:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:41.617 21:13:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:41.617 21:13:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:41.617 21:13:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:41.617 21:13:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:41.617 21:13:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:41.617 21:13:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:41.617 21:13:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:41.617 21:13:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.617 21:13:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.876 21:13:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:41.876 "name": "Existed_Raid", 00:16:41.876 "uuid": "43ac086e-1449-4fbf-99dc-554041f5c080", 00:16:41.876 "strip_size_kb": 64, 00:16:41.876 "state": "online", 00:16:41.876 "raid_level": "raid0", 00:16:41.876 "superblock": false, 00:16:41.876 "num_base_bdevs": 4, 00:16:41.876 "num_base_bdevs_discovered": 4, 00:16:41.876 "num_base_bdevs_operational": 4, 00:16:41.876 "base_bdevs_list": [ 00:16:41.876 { 00:16:41.876 "name": "BaseBdev1", 00:16:41.876 "uuid": "40ce6d23-ae73-45f0-a734-9e1fe7ba47c4", 00:16:41.876 "is_configured": true, 00:16:41.876 "data_offset": 0, 00:16:41.876 "data_size": 65536 00:16:41.876 }, 00:16:41.876 { 00:16:41.876 "name": "BaseBdev2", 00:16:41.876 "uuid": "3edcf972-cfcd-4a59-814c-d8e326654a41", 00:16:41.876 "is_configured": true, 00:16:41.876 "data_offset": 0, 00:16:41.876 "data_size": 65536 00:16:41.876 }, 00:16:41.876 { 00:16:41.876 "name": "BaseBdev3", 00:16:41.876 "uuid": "c16ec73f-4491-438a-9177-ac71be5bff2a", 00:16:41.876 "is_configured": true, 00:16:41.876 "data_offset": 0, 00:16:41.876 "data_size": 65536 00:16:41.876 }, 00:16:41.876 { 00:16:41.876 "name": "BaseBdev4", 00:16:41.876 "uuid": "cf726e44-c896-4938-a7d8-59b53f7c8bb0", 00:16:41.876 "is_configured": true, 00:16:41.876 "data_offset": 0, 00:16:41.876 "data_size": 65536 00:16:41.876 } 00:16:41.876 ] 00:16:41.876 }' 00:16:41.876 21:13:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:41.876 21:13:04 -- common/autotest_common.sh@10 -- # set +x 00:16:42.811 21:13:05 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:42.811 
[2024-06-07 21:13:05.348567] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:42.811 [2024-06-07 21:13:05.348811] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.811 [2024-06-07 21:13:05.349030] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.811 21:13:05 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:42.811 21:13:05 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:42.811 21:13:05 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:42.811 21:13:05 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:42.811 21:13:05 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:42.811 21:13:05 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:16:42.811 21:13:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:42.811 21:13:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:42.811 21:13:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:42.811 21:13:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:42.811 21:13:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:42.811 21:13:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:42.811 21:13:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:42.811 21:13:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:42.811 21:13:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:42.811 21:13:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.811 21:13:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.069 21:13:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:43.069 "name": "Existed_Raid", 00:16:43.069 "uuid": "43ac086e-1449-4fbf-99dc-554041f5c080", 00:16:43.069 "strip_size_kb": 64, 00:16:43.069 "state": "offline", 00:16:43.069 "raid_level": "raid0", 00:16:43.069 "superblock": false, 00:16:43.069 "num_base_bdevs": 4, 00:16:43.069 "num_base_bdevs_discovered": 3, 00:16:43.069 "num_base_bdevs_operational": 3, 00:16:43.069 "base_bdevs_list": [ 00:16:43.069 { 00:16:43.069 "name": null, 00:16:43.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.069 "is_configured": false, 00:16:43.069 "data_offset": 0, 00:16:43.069 "data_size": 65536 00:16:43.069 }, 00:16:43.069 { 00:16:43.069 "name": "BaseBdev2", 00:16:43.069 "uuid": "3edcf972-cfcd-4a59-814c-d8e326654a41", 00:16:43.069 "is_configured": true, 00:16:43.069 "data_offset": 0, 00:16:43.069 "data_size": 65536 00:16:43.069 }, 00:16:43.069 { 00:16:43.069 "name": "BaseBdev3", 00:16:43.069 "uuid": "c16ec73f-4491-438a-9177-ac71be5bff2a", 00:16:43.069 "is_configured": true, 00:16:43.069 "data_offset": 0, 00:16:43.069 "data_size": 65536 00:16:43.069 }, 00:16:43.069 { 00:16:43.069 "name": "BaseBdev4", 00:16:43.069 "uuid": "cf726e44-c896-4938-a7d8-59b53f7c8bb0", 00:16:43.069 "is_configured": true, 00:16:43.069 "data_offset": 0, 00:16:43.069 "data_size": 65536 00:16:43.069 } 00:16:43.069 ] 00:16:43.070 }' 00:16:43.070 21:13:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:43.070 21:13:05 -- common/autotest_common.sh@10 -- # set +x 00:16:44.004 21:13:06 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:44.005 21:13:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:44.005 21:13:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.005 
21:13:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:44.005 21:13:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:44.005 21:13:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:44.005 21:13:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:44.263 [2024-06-07 21:13:06.827503] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:44.263 21:13:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:44.263 21:13:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:44.263 21:13:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.263 21:13:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:44.522 21:13:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:44.522 21:13:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:44.522 21:13:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:44.780 [2024-06-07 21:13:07.302584] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:44.780 21:13:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:44.780 21:13:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:44.780 21:13:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.780 21:13:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:45.039 21:13:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:45.039 21:13:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:45.039 21:13:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:45.297 [2024-06-07 21:13:07.820824] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:45.298 [2024-06-07 21:13:07.821077] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:16:45.298 21:13:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:45.298 21:13:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:45.298 21:13:07 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.298 21:13:07 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:45.557 21:13:08 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:45.557 21:13:08 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:45.557 21:13:08 -- bdev/bdev_raid.sh@287 -- # killprocess 131755 00:16:45.557 21:13:08 -- common/autotest_common.sh@926 -- # '[' -z 131755 ']' 00:16:45.557 21:13:08 -- common/autotest_common.sh@930 -- # kill -0 131755 00:16:45.557 21:13:08 -- common/autotest_common.sh@931 -- # uname 00:16:45.557 21:13:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:45.557 21:13:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131755 00:16:45.557 killing process with pid 131755 00:16:45.557 21:13:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:45.557 21:13:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:45.557 21:13:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131755' 00:16:45.557 21:13:08 -- common/autotest_common.sh@945 -- # kill 131755 
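
Teardown funnels through the shared killprocess helper, and the @926-@950 trace around this point shows nearly all of it: the pid guard, a kill -0 liveness check, resolving the process name with ps on Linux (reactor_0 here), an exclusion for sudo-wrapped processes, then the kill itself. The sketch below only fills in the control flow around those echoed commands; the sudo branch body is an assumption since it never runs in this log.

    # Reconstructed from the @926-@950 trace; the sudo branch is assumed.
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid"                                # still alive?
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [[ $process_name == sudo ]]; then
            :   # assumed: would need to signal sudo's child instead
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }

The wait 131755 that opens the next record block is the tail of this helper, after which the fini and cleanup DEBUG records confirm the raid bdev module shut down with the array already offline.
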
00:16:45.557 21:13:08 -- common/autotest_common.sh@950 -- # wait 131755 00:16:45.557 [2024-06-07 21:13:08.070615] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:45.557 [2024-06-07 21:13:08.070700] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:45.816 ************************************ 00:16:45.816 END TEST raid_state_function_test 00:16:45.816 ************************************ 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:45.816 00:16:45.816 real 0m13.598s 00:16:45.816 user 0m25.469s 00:16:45.816 sys 0m1.481s 00:16:45.816 21:13:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:45.816 21:13:08 -- common/autotest_common.sh@10 -- # set +x 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:16:45.816 21:13:08 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:45.816 21:13:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:45.816 21:13:08 -- common/autotest_common.sh@10 -- # set +x 00:16:45.816 ************************************ 00:16:45.816 START TEST raid_state_function_test_sb 00:16:45.816 ************************************ 00:16:45.816 21:13:08 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:45.816 Process raid pid: 132205 00:16:45.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
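The startup that follows is the standard SPDK test-harness pattern: launch the bdev_svc stub on a private RPC socket, then block until it answers before driving it with rpc.py. A minimal sketch of that pattern, assuming the paths used in this run (/home/vagrant/spdk_repo/spdk, /var/tmp/spdk-raid.sock) and the waitforlisten helper from test/common/autotest_common.sh that the trace invokes:

    # Launch the stub app with bdev_raid debug logging on a dedicated socket.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!

    # Block until the app accepts RPCs on that socket; this is what prints the
    # "Waiting for process to start up..." message seen in the log.
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock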
00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@226 -- # raid_pid=132205 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 132205' 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@228 -- # waitforlisten 132205 /var/tmp/spdk-raid.sock 00:16:45.816 21:13:08 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:45.816 21:13:08 -- common/autotest_common.sh@819 -- # '[' -z 132205 ']' 00:16:45.816 21:13:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:45.816 21:13:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:45.816 21:13:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:45.816 21:13:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:45.816 21:13:08 -- common/autotest_common.sh@10 -- # set +x 00:16:45.816 [2024-06-07 21:13:08.439551] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:45.816 [2024-06-07 21:13:08.439773] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.075 [2024-06-07 21:13:08.604835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.075 [2024-06-07 21:13:08.677802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.075 [2024-06-07 21:13:08.733096] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.050 21:13:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:47.050 21:13:09 -- common/autotest_common.sh@852 -- # return 0 00:16:47.050 21:13:09 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:47.050 [2024-06-07 21:13:09.551956] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:47.050 [2024-06-07 21:13:09.552052] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:47.050 [2024-06-07 21:13:09.552098] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:47.050 [2024-06-07 21:13:09.552123] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:47.050 [2024-06-07 21:13:09.552130] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:47.050 [2024-06-07 21:13:09.552169] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:47.050 [2024-06-07 21:13:09.552177] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:47.050 [2024-06-07 21:13:09.552200] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:47.050 21:13:09 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:47.050 21:13:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:47.050 21:13:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:47.050 21:13:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:47.050 21:13:09 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:47.050 21:13:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:47.050 21:13:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:47.050 21:13:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:47.050 21:13:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:47.050 21:13:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:47.050 21:13:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.050 21:13:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.309 21:13:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:47.309 "name": "Existed_Raid", 00:16:47.309 "uuid": "1341f354-9c44-441e-8a3b-e576cd75ec9c", 00:16:47.309 "strip_size_kb": 64, 00:16:47.309 "state": "configuring", 00:16:47.309 "raid_level": "raid0", 00:16:47.309 "superblock": true, 00:16:47.309 "num_base_bdevs": 4, 00:16:47.309 "num_base_bdevs_discovered": 0, 00:16:47.309 "num_base_bdevs_operational": 4, 00:16:47.309 "base_bdevs_list": [ 00:16:47.309 { 00:16:47.309 "name": "BaseBdev1", 00:16:47.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.309 "is_configured": false, 00:16:47.309 "data_offset": 0, 00:16:47.309 "data_size": 0 00:16:47.309 }, 00:16:47.309 { 00:16:47.309 "name": "BaseBdev2", 00:16:47.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.309 "is_configured": false, 00:16:47.309 "data_offset": 0, 00:16:47.309 "data_size": 0 00:16:47.309 }, 00:16:47.309 { 00:16:47.309 "name": "BaseBdev3", 00:16:47.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.309 "is_configured": false, 00:16:47.309 "data_offset": 0, 00:16:47.309 "data_size": 0 00:16:47.309 }, 00:16:47.309 { 00:16:47.309 "name": "BaseBdev4", 00:16:47.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.309 "is_configured": false, 00:16:47.309 "data_offset": 0, 00:16:47.309 "data_size": 0 00:16:47.309 } 00:16:47.309 ] 00:16:47.309 }' 00:16:47.309 21:13:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:47.309 21:13:09 -- common/autotest_common.sh@10 -- # set +x 00:16:47.877 21:13:10 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:48.135 [2024-06-07 21:13:10.716034] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:48.135 [2024-06-07 21:13:10.716110] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:48.135 21:13:10 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:48.394 [2024-06-07 21:13:10.968105] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:48.394 [2024-06-07 21:13:10.968179] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:48.394 [2024-06-07 21:13:10.968206] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:48.394 [2024-06-07 21:13:10.968238] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:48.394 [2024-06-07 21:13:10.968246] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:48.394 [2024-06-07 21:13:10.968281] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:48.394 [2024-06-07 21:13:10.968289] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:48.394 [2024-06-07 21:13:10.968311] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:48.394 21:13:10 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:48.652 [2024-06-07 21:13:11.187658] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.652 BaseBdev1 00:16:48.652 21:13:11 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:48.652 21:13:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:48.652 21:13:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:48.652 21:13:11 -- common/autotest_common.sh@889 -- # local i 00:16:48.652 21:13:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:48.652 21:13:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:48.652 21:13:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:48.910 21:13:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:49.169 [ 00:16:49.169 { 00:16:49.169 "name": "BaseBdev1", 00:16:49.169 "aliases": [ 00:16:49.169 "54d5a6e9-dbc6-4fdd-8673-7ec7418f2dd7" 00:16:49.169 ], 00:16:49.169 "product_name": "Malloc disk", 00:16:49.169 "block_size": 512, 00:16:49.169 "num_blocks": 65536, 00:16:49.169 "uuid": "54d5a6e9-dbc6-4fdd-8673-7ec7418f2dd7", 00:16:49.169 "assigned_rate_limits": { 00:16:49.169 "rw_ios_per_sec": 0, 00:16:49.169 "rw_mbytes_per_sec": 0, 00:16:49.169 "r_mbytes_per_sec": 0, 00:16:49.169 "w_mbytes_per_sec": 0 00:16:49.169 }, 00:16:49.169 "claimed": true, 00:16:49.169 "claim_type": "exclusive_write", 00:16:49.169 "zoned": false, 00:16:49.169 "supported_io_types": { 00:16:49.169 "read": true, 00:16:49.169 "write": true, 00:16:49.169 "unmap": true, 00:16:49.169 "write_zeroes": true, 00:16:49.169 "flush": true, 00:16:49.169 "reset": true, 00:16:49.169 "compare": false, 00:16:49.169 "compare_and_write": false, 00:16:49.169 "abort": true, 00:16:49.169 "nvme_admin": false, 00:16:49.169 "nvme_io": false 00:16:49.169 }, 00:16:49.169 "memory_domains": [ 00:16:49.169 { 00:16:49.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.169 "dma_device_type": 2 00:16:49.169 } 00:16:49.169 ], 00:16:49.169 "driver_specific": {} 00:16:49.169 } 00:16:49.169 ] 00:16:49.169 21:13:11 -- common/autotest_common.sh@895 -- # return 0 00:16:49.169 21:13:11 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:49.169 21:13:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:49.169 21:13:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:49.169 21:13:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:49.169 21:13:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:49.169 21:13:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:49.169 21:13:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:49.169 21:13:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:49.169 21:13:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:49.169 21:13:11 -- bdev/bdev_raid.sh@125 -- # local tmp 
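Condensed, the create-and-inspect cycle being traced here looks like the sketch below (a paraphrase of the steps in bdev_raid.sh, not the script's literal text). -z 64 is the strip size in KiB, -s requests an on-disk superblock, and base bdevs may be named before they exist, which is why the array sits in the "configuring" state until all four are created:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # Register a 4-disk RAID0 with a superblock; none of the base bdevs exist
    # yet, so the array is created in the "configuring" state.
    rpc bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

    # Inspect it the way verify_raid_bdev_state does, filtering by name.
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'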
00:16:49.169 21:13:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.169 21:13:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.428 21:13:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:49.428 "name": "Existed_Raid", 00:16:49.428 "uuid": "17199392-aaff-4c39-893b-ef199294dd82", 00:16:49.428 "strip_size_kb": 64, 00:16:49.428 "state": "configuring", 00:16:49.428 "raid_level": "raid0", 00:16:49.428 "superblock": true, 00:16:49.428 "num_base_bdevs": 4, 00:16:49.428 "num_base_bdevs_discovered": 1, 00:16:49.428 "num_base_bdevs_operational": 4, 00:16:49.428 "base_bdevs_list": [ 00:16:49.428 { 00:16:49.428 "name": "BaseBdev1", 00:16:49.428 "uuid": "54d5a6e9-dbc6-4fdd-8673-7ec7418f2dd7", 00:16:49.428 "is_configured": true, 00:16:49.428 "data_offset": 2048, 00:16:49.428 "data_size": 63488 00:16:49.428 }, 00:16:49.428 { 00:16:49.428 "name": "BaseBdev2", 00:16:49.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.428 "is_configured": false, 00:16:49.428 "data_offset": 0, 00:16:49.428 "data_size": 0 00:16:49.428 }, 00:16:49.428 { 00:16:49.428 "name": "BaseBdev3", 00:16:49.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.428 "is_configured": false, 00:16:49.428 "data_offset": 0, 00:16:49.428 "data_size": 0 00:16:49.428 }, 00:16:49.428 { 00:16:49.428 "name": "BaseBdev4", 00:16:49.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.428 "is_configured": false, 00:16:49.428 "data_offset": 0, 00:16:49.428 "data_size": 0 00:16:49.428 } 00:16:49.428 ] 00:16:49.428 }' 00:16:49.428 21:13:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:49.428 21:13:11 -- common/autotest_common.sh@10 -- # set +x 00:16:49.995 21:13:12 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:50.254 [2024-06-07 21:13:12.772061] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:50.254 [2024-06-07 21:13:12.772148] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:50.254 21:13:12 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:50.254 21:13:12 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:50.513 21:13:13 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:50.772 BaseBdev1 00:16:50.772 21:13:13 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:50.772 21:13:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:50.772 21:13:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:50.772 21:13:13 -- common/autotest_common.sh@889 -- # local i 00:16:50.772 21:13:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:50.772 21:13:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:50.772 21:13:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:51.031 21:13:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:51.289 [ 00:16:51.289 { 00:16:51.289 "name": "BaseBdev1", 00:16:51.289 "aliases": [ 00:16:51.289 "93571f45-10aa-4c02-ba82-4a75d7e43aee" 00:16:51.289 ], 00:16:51.289 
"product_name": "Malloc disk", 00:16:51.289 "block_size": 512, 00:16:51.289 "num_blocks": 65536, 00:16:51.289 "uuid": "93571f45-10aa-4c02-ba82-4a75d7e43aee", 00:16:51.289 "assigned_rate_limits": { 00:16:51.289 "rw_ios_per_sec": 0, 00:16:51.289 "rw_mbytes_per_sec": 0, 00:16:51.289 "r_mbytes_per_sec": 0, 00:16:51.289 "w_mbytes_per_sec": 0 00:16:51.289 }, 00:16:51.289 "claimed": false, 00:16:51.289 "zoned": false, 00:16:51.289 "supported_io_types": { 00:16:51.290 "read": true, 00:16:51.290 "write": true, 00:16:51.290 "unmap": true, 00:16:51.290 "write_zeroes": true, 00:16:51.290 "flush": true, 00:16:51.290 "reset": true, 00:16:51.290 "compare": false, 00:16:51.290 "compare_and_write": false, 00:16:51.290 "abort": true, 00:16:51.290 "nvme_admin": false, 00:16:51.290 "nvme_io": false 00:16:51.290 }, 00:16:51.290 "memory_domains": [ 00:16:51.290 { 00:16:51.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.290 "dma_device_type": 2 00:16:51.290 } 00:16:51.290 ], 00:16:51.290 "driver_specific": {} 00:16:51.290 } 00:16:51.290 ] 00:16:51.290 21:13:13 -- common/autotest_common.sh@895 -- # return 0 00:16:51.290 21:13:13 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:51.549 [2024-06-07 21:13:13.984830] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:51.549 [2024-06-07 21:13:13.986822] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:51.549 [2024-06-07 21:13:13.986962] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:51.549 [2024-06-07 21:13:13.986976] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:51.549 [2024-06-07 21:13:13.987002] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:51.549 [2024-06-07 21:13:13.987011] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:51.549 [2024-06-07 21:13:13.987028] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:51.549 21:13:13 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:51.549 21:13:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:51.549 21:13:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:51.549 21:13:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:51.549 21:13:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:51.549 21:13:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:51.549 21:13:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:51.549 21:13:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:51.549 21:13:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:51.549 21:13:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:51.549 21:13:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:51.549 21:13:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:51.549 21:13:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.549 21:13:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.549 21:13:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:51.549 "name": "Existed_Raid", 00:16:51.549 
"uuid": "fd00ad95-4e69-487c-b01e-688f4bb2c43e", 00:16:51.549 "strip_size_kb": 64, 00:16:51.549 "state": "configuring", 00:16:51.549 "raid_level": "raid0", 00:16:51.549 "superblock": true, 00:16:51.549 "num_base_bdevs": 4, 00:16:51.549 "num_base_bdevs_discovered": 1, 00:16:51.549 "num_base_bdevs_operational": 4, 00:16:51.549 "base_bdevs_list": [ 00:16:51.549 { 00:16:51.549 "name": "BaseBdev1", 00:16:51.549 "uuid": "93571f45-10aa-4c02-ba82-4a75d7e43aee", 00:16:51.549 "is_configured": true, 00:16:51.549 "data_offset": 2048, 00:16:51.549 "data_size": 63488 00:16:51.549 }, 00:16:51.549 { 00:16:51.549 "name": "BaseBdev2", 00:16:51.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.549 "is_configured": false, 00:16:51.549 "data_offset": 0, 00:16:51.549 "data_size": 0 00:16:51.549 }, 00:16:51.549 { 00:16:51.549 "name": "BaseBdev3", 00:16:51.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.549 "is_configured": false, 00:16:51.549 "data_offset": 0, 00:16:51.549 "data_size": 0 00:16:51.549 }, 00:16:51.549 { 00:16:51.549 "name": "BaseBdev4", 00:16:51.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.549 "is_configured": false, 00:16:51.549 "data_offset": 0, 00:16:51.549 "data_size": 0 00:16:51.549 } 00:16:51.549 ] 00:16:51.549 }' 00:16:51.549 21:13:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:51.549 21:13:14 -- common/autotest_common.sh@10 -- # set +x 00:16:52.486 21:13:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:52.486 [2024-06-07 21:13:15.128550] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:52.486 BaseBdev2 00:16:52.486 21:13:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:52.486 21:13:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:52.486 21:13:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:52.486 21:13:15 -- common/autotest_common.sh@889 -- # local i 00:16:52.486 21:13:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:52.486 21:13:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:52.486 21:13:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:52.745 21:13:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:53.003 [ 00:16:53.003 { 00:16:53.003 "name": "BaseBdev2", 00:16:53.003 "aliases": [ 00:16:53.003 "a8e5886e-165b-416b-8257-05c91dfcf0e3" 00:16:53.003 ], 00:16:53.003 "product_name": "Malloc disk", 00:16:53.003 "block_size": 512, 00:16:53.003 "num_blocks": 65536, 00:16:53.003 "uuid": "a8e5886e-165b-416b-8257-05c91dfcf0e3", 00:16:53.003 "assigned_rate_limits": { 00:16:53.003 "rw_ios_per_sec": 0, 00:16:53.003 "rw_mbytes_per_sec": 0, 00:16:53.003 "r_mbytes_per_sec": 0, 00:16:53.003 "w_mbytes_per_sec": 0 00:16:53.003 }, 00:16:53.003 "claimed": true, 00:16:53.003 "claim_type": "exclusive_write", 00:16:53.003 "zoned": false, 00:16:53.003 "supported_io_types": { 00:16:53.003 "read": true, 00:16:53.003 "write": true, 00:16:53.003 "unmap": true, 00:16:53.003 "write_zeroes": true, 00:16:53.003 "flush": true, 00:16:53.003 "reset": true, 00:16:53.003 "compare": false, 00:16:53.003 "compare_and_write": false, 00:16:53.003 "abort": true, 00:16:53.003 "nvme_admin": false, 00:16:53.003 "nvme_io": false 00:16:53.003 }, 00:16:53.003 "memory_domains": [ 
00:16:53.004 { 00:16:53.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.004 "dma_device_type": 2 00:16:53.004 } 00:16:53.004 ], 00:16:53.004 "driver_specific": {} 00:16:53.004 } 00:16:53.004 ] 00:16:53.004 21:13:15 -- common/autotest_common.sh@895 -- # return 0 00:16:53.004 21:13:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:53.004 21:13:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:53.004 21:13:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:53.004 21:13:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:53.004 21:13:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:53.004 21:13:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:53.004 21:13:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:53.004 21:13:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:53.004 21:13:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:53.004 21:13:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:53.004 21:13:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:53.004 21:13:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:53.004 21:13:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.004 21:13:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.262 21:13:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:53.262 "name": "Existed_Raid", 00:16:53.262 "uuid": "fd00ad95-4e69-487c-b01e-688f4bb2c43e", 00:16:53.262 "strip_size_kb": 64, 00:16:53.262 "state": "configuring", 00:16:53.262 "raid_level": "raid0", 00:16:53.262 "superblock": true, 00:16:53.262 "num_base_bdevs": 4, 00:16:53.262 "num_base_bdevs_discovered": 2, 00:16:53.262 "num_base_bdevs_operational": 4, 00:16:53.262 "base_bdevs_list": [ 00:16:53.262 { 00:16:53.262 "name": "BaseBdev1", 00:16:53.262 "uuid": "93571f45-10aa-4c02-ba82-4a75d7e43aee", 00:16:53.262 "is_configured": true, 00:16:53.262 "data_offset": 2048, 00:16:53.262 "data_size": 63488 00:16:53.262 }, 00:16:53.262 { 00:16:53.262 "name": "BaseBdev2", 00:16:53.262 "uuid": "a8e5886e-165b-416b-8257-05c91dfcf0e3", 00:16:53.262 "is_configured": true, 00:16:53.262 "data_offset": 2048, 00:16:53.262 "data_size": 63488 00:16:53.262 }, 00:16:53.262 { 00:16:53.262 "name": "BaseBdev3", 00:16:53.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.262 "is_configured": false, 00:16:53.262 "data_offset": 0, 00:16:53.262 "data_size": 0 00:16:53.262 }, 00:16:53.262 { 00:16:53.262 "name": "BaseBdev4", 00:16:53.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.262 "is_configured": false, 00:16:53.262 "data_offset": 0, 00:16:53.262 "data_size": 0 00:16:53.262 } 00:16:53.262 ] 00:16:53.262 }' 00:16:53.262 21:13:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:53.262 21:13:15 -- common/autotest_common.sh@10 -- # set +x 00:16:54.214 21:13:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:54.214 [2024-06-07 21:13:16.705870] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:54.214 BaseBdev3 00:16:54.214 21:13:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:54.214 21:13:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:54.214 21:13:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:54.214 
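Every waitforbdev call in this trace expands to the same two RPCs: flush the pending examine work, then fetch the named bdev with a 2000 ms wait. A standalone sketch under those assumptions (the wait_for_bdev wrapper name is hypothetical; the RPCs are the ones traced here):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    wait_for_bdev() {
        local name=$1
        rpc bdev_wait_for_examine                          # let examine callbacks finish
        rpc bdev_get_bdevs -b "$name" -t 2000 > /dev/null  # wait up to 2000 ms for the bdev
    }

    wait_for_bdev BaseBdev3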
21:13:16 -- common/autotest_common.sh@889 -- # local i 00:16:54.214 21:13:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:54.214 21:13:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:54.214 21:13:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:54.472 21:13:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:54.731 [ 00:16:54.731 { 00:16:54.731 "name": "BaseBdev3", 00:16:54.731 "aliases": [ 00:16:54.731 "d3e8538a-c96b-449f-a1a3-527ac94a4761" 00:16:54.731 ], 00:16:54.731 "product_name": "Malloc disk", 00:16:54.731 "block_size": 512, 00:16:54.731 "num_blocks": 65536, 00:16:54.731 "uuid": "d3e8538a-c96b-449f-a1a3-527ac94a4761", 00:16:54.731 "assigned_rate_limits": { 00:16:54.731 "rw_ios_per_sec": 0, 00:16:54.731 "rw_mbytes_per_sec": 0, 00:16:54.731 "r_mbytes_per_sec": 0, 00:16:54.731 "w_mbytes_per_sec": 0 00:16:54.731 }, 00:16:54.731 "claimed": true, 00:16:54.731 "claim_type": "exclusive_write", 00:16:54.731 "zoned": false, 00:16:54.731 "supported_io_types": { 00:16:54.731 "read": true, 00:16:54.731 "write": true, 00:16:54.731 "unmap": true, 00:16:54.731 "write_zeroes": true, 00:16:54.731 "flush": true, 00:16:54.731 "reset": true, 00:16:54.731 "compare": false, 00:16:54.731 "compare_and_write": false, 00:16:54.731 "abort": true, 00:16:54.731 "nvme_admin": false, 00:16:54.731 "nvme_io": false 00:16:54.731 }, 00:16:54.731 "memory_domains": [ 00:16:54.731 { 00:16:54.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.731 "dma_device_type": 2 00:16:54.731 } 00:16:54.731 ], 00:16:54.731 "driver_specific": {} 00:16:54.731 } 00:16:54.731 ] 00:16:54.731 21:13:17 -- common/autotest_common.sh@895 -- # return 0 00:16:54.731 21:13:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:54.731 21:13:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:54.731 21:13:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:54.731 21:13:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:54.731 21:13:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:54.731 21:13:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:54.731 21:13:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:54.731 21:13:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:54.731 21:13:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:54.731 21:13:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:54.731 21:13:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:54.731 21:13:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:54.731 21:13:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.731 21:13:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.731 21:13:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:54.731 "name": "Existed_Raid", 00:16:54.731 "uuid": "fd00ad95-4e69-487c-b01e-688f4bb2c43e", 00:16:54.731 "strip_size_kb": 64, 00:16:54.731 "state": "configuring", 00:16:54.731 "raid_level": "raid0", 00:16:54.731 "superblock": true, 00:16:54.731 "num_base_bdevs": 4, 00:16:54.731 "num_base_bdevs_discovered": 3, 00:16:54.731 "num_base_bdevs_operational": 4, 00:16:54.731 "base_bdevs_list": [ 00:16:54.731 { 00:16:54.731 "name": 
"BaseBdev1", 00:16:54.731 "uuid": "93571f45-10aa-4c02-ba82-4a75d7e43aee", 00:16:54.731 "is_configured": true, 00:16:54.731 "data_offset": 2048, 00:16:54.731 "data_size": 63488 00:16:54.731 }, 00:16:54.731 { 00:16:54.731 "name": "BaseBdev2", 00:16:54.731 "uuid": "a8e5886e-165b-416b-8257-05c91dfcf0e3", 00:16:54.731 "is_configured": true, 00:16:54.731 "data_offset": 2048, 00:16:54.731 "data_size": 63488 00:16:54.731 }, 00:16:54.731 { 00:16:54.731 "name": "BaseBdev3", 00:16:54.731 "uuid": "d3e8538a-c96b-449f-a1a3-527ac94a4761", 00:16:54.731 "is_configured": true, 00:16:54.731 "data_offset": 2048, 00:16:54.731 "data_size": 63488 00:16:54.731 }, 00:16:54.731 { 00:16:54.731 "name": "BaseBdev4", 00:16:54.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.731 "is_configured": false, 00:16:54.731 "data_offset": 0, 00:16:54.731 "data_size": 0 00:16:54.731 } 00:16:54.731 ] 00:16:54.731 }' 00:16:54.731 21:13:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:54.731 21:13:17 -- common/autotest_common.sh@10 -- # set +x 00:16:55.666 21:13:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:55.924 [2024-06-07 21:13:18.363210] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:55.924 [2024-06-07 21:13:18.363487] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:16:55.924 [2024-06-07 21:13:18.363501] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:55.924 [2024-06-07 21:13:18.363676] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:55.924 BaseBdev4 00:16:55.924 [2024-06-07 21:13:18.364113] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:16:55.924 [2024-06-07 21:13:18.364152] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:16:55.924 [2024-06-07 21:13:18.364342] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.924 21:13:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:16:55.924 21:13:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:16:55.924 21:13:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:55.924 21:13:18 -- common/autotest_common.sh@889 -- # local i 00:16:55.924 21:13:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:55.924 21:13:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:55.924 21:13:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:56.182 21:13:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:56.182 [ 00:16:56.182 { 00:16:56.182 "name": "BaseBdev4", 00:16:56.182 "aliases": [ 00:16:56.182 "124bebd2-21bf-4791-8274-dfe1ec95f133" 00:16:56.182 ], 00:16:56.182 "product_name": "Malloc disk", 00:16:56.182 "block_size": 512, 00:16:56.182 "num_blocks": 65536, 00:16:56.182 "uuid": "124bebd2-21bf-4791-8274-dfe1ec95f133", 00:16:56.182 "assigned_rate_limits": { 00:16:56.182 "rw_ios_per_sec": 0, 00:16:56.182 "rw_mbytes_per_sec": 0, 00:16:56.182 "r_mbytes_per_sec": 0, 00:16:56.182 "w_mbytes_per_sec": 0 00:16:56.182 }, 00:16:56.182 "claimed": true, 00:16:56.182 "claim_type": "exclusive_write", 00:16:56.182 "zoned": false, 00:16:56.182 
"supported_io_types": { 00:16:56.182 "read": true, 00:16:56.182 "write": true, 00:16:56.182 "unmap": true, 00:16:56.182 "write_zeroes": true, 00:16:56.182 "flush": true, 00:16:56.182 "reset": true, 00:16:56.182 "compare": false, 00:16:56.182 "compare_and_write": false, 00:16:56.182 "abort": true, 00:16:56.182 "nvme_admin": false, 00:16:56.182 "nvme_io": false 00:16:56.182 }, 00:16:56.182 "memory_domains": [ 00:16:56.182 { 00:16:56.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.182 "dma_device_type": 2 00:16:56.182 } 00:16:56.182 ], 00:16:56.182 "driver_specific": {} 00:16:56.182 } 00:16:56.182 ] 00:16:56.182 21:13:18 -- common/autotest_common.sh@895 -- # return 0 00:16:56.182 21:13:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:56.182 21:13:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:56.182 21:13:18 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:56.182 21:13:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:56.182 21:13:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:56.183 21:13:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:56.183 21:13:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:56.183 21:13:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:56.183 21:13:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:56.183 21:13:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:56.183 21:13:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:56.183 21:13:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:56.183 21:13:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.183 21:13:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.441 21:13:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:56.441 "name": "Existed_Raid", 00:16:56.441 "uuid": "fd00ad95-4e69-487c-b01e-688f4bb2c43e", 00:16:56.441 "strip_size_kb": 64, 00:16:56.441 "state": "online", 00:16:56.441 "raid_level": "raid0", 00:16:56.441 "superblock": true, 00:16:56.441 "num_base_bdevs": 4, 00:16:56.441 "num_base_bdevs_discovered": 4, 00:16:56.441 "num_base_bdevs_operational": 4, 00:16:56.441 "base_bdevs_list": [ 00:16:56.441 { 00:16:56.441 "name": "BaseBdev1", 00:16:56.441 "uuid": "93571f45-10aa-4c02-ba82-4a75d7e43aee", 00:16:56.441 "is_configured": true, 00:16:56.441 "data_offset": 2048, 00:16:56.441 "data_size": 63488 00:16:56.441 }, 00:16:56.441 { 00:16:56.441 "name": "BaseBdev2", 00:16:56.441 "uuid": "a8e5886e-165b-416b-8257-05c91dfcf0e3", 00:16:56.441 "is_configured": true, 00:16:56.441 "data_offset": 2048, 00:16:56.441 "data_size": 63488 00:16:56.441 }, 00:16:56.441 { 00:16:56.441 "name": "BaseBdev3", 00:16:56.441 "uuid": "d3e8538a-c96b-449f-a1a3-527ac94a4761", 00:16:56.441 "is_configured": true, 00:16:56.441 "data_offset": 2048, 00:16:56.441 "data_size": 63488 00:16:56.441 }, 00:16:56.441 { 00:16:56.441 "name": "BaseBdev4", 00:16:56.441 "uuid": "124bebd2-21bf-4791-8274-dfe1ec95f133", 00:16:56.441 "is_configured": true, 00:16:56.441 "data_offset": 2048, 00:16:56.441 "data_size": 63488 00:16:56.441 } 00:16:56.441 ] 00:16:56.441 }' 00:16:56.441 21:13:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:56.441 21:13:19 -- common/autotest_common.sh@10 -- # set +x 00:16:57.376 21:13:19 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:16:57.376 [2024-06-07 21:13:19.893442] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:57.376 [2024-06-07 21:13:19.893479] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.376 [2024-06-07 21:13:19.893591] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.376 21:13:19 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:57.376 21:13:19 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:57.376 21:13:19 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:57.376 21:13:19 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:57.376 21:13:19 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:57.376 21:13:19 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:16:57.376 21:13:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:57.376 21:13:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:57.376 21:13:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:57.376 21:13:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:57.376 21:13:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:57.376 21:13:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:57.376 21:13:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:57.376 21:13:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:57.376 21:13:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:57.376 21:13:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.376 21:13:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.635 21:13:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:57.635 "name": "Existed_Raid", 00:16:57.635 "uuid": "fd00ad95-4e69-487c-b01e-688f4bb2c43e", 00:16:57.635 "strip_size_kb": 64, 00:16:57.635 "state": "offline", 00:16:57.635 "raid_level": "raid0", 00:16:57.635 "superblock": true, 00:16:57.635 "num_base_bdevs": 4, 00:16:57.635 "num_base_bdevs_discovered": 3, 00:16:57.635 "num_base_bdevs_operational": 3, 00:16:57.635 "base_bdevs_list": [ 00:16:57.635 { 00:16:57.635 "name": null, 00:16:57.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.635 "is_configured": false, 00:16:57.635 "data_offset": 2048, 00:16:57.635 "data_size": 63488 00:16:57.635 }, 00:16:57.635 { 00:16:57.635 "name": "BaseBdev2", 00:16:57.635 "uuid": "a8e5886e-165b-416b-8257-05c91dfcf0e3", 00:16:57.635 "is_configured": true, 00:16:57.635 "data_offset": 2048, 00:16:57.635 "data_size": 63488 00:16:57.635 }, 00:16:57.635 { 00:16:57.635 "name": "BaseBdev3", 00:16:57.635 "uuid": "d3e8538a-c96b-449f-a1a3-527ac94a4761", 00:16:57.635 "is_configured": true, 00:16:57.635 "data_offset": 2048, 00:16:57.635 "data_size": 63488 00:16:57.635 }, 00:16:57.635 { 00:16:57.635 "name": "BaseBdev4", 00:16:57.635 "uuid": "124bebd2-21bf-4791-8274-dfe1ec95f133", 00:16:57.635 "is_configured": true, 00:16:57.635 "data_offset": 2048, 00:16:57.635 "data_size": 63488 00:16:57.635 } 00:16:57.635 ] 00:16:57.635 }' 00:16:57.635 21:13:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:57.635 21:13:20 -- common/autotest_common.sh@10 -- # set +x 00:16:58.201 21:13:20 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:58.201 21:13:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:58.201 21:13:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:16:58.201 21:13:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:58.459 21:13:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:58.459 21:13:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:58.459 21:13:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:58.716 [2024-06-07 21:13:21.243589] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:58.716 21:13:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:58.716 21:13:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:58.716 21:13:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.716 21:13:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:58.973 21:13:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:58.973 21:13:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:58.973 21:13:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:59.229 [2024-06-07 21:13:21.709901] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:59.229 21:13:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:59.229 21:13:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:59.229 21:13:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.229 21:13:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:59.487 21:13:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:59.487 21:13:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:59.487 21:13:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:59.744 [2024-06-07 21:13:22.184152] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:59.744 [2024-06-07 21:13:22.184239] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:16:59.744 21:13:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:59.744 21:13:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:59.744 21:13:22 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.744 21:13:22 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:59.744 21:13:22 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:59.744 21:13:22 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:59.744 21:13:22 -- bdev/bdev_raid.sh@287 -- # killprocess 132205 00:16:59.744 21:13:22 -- common/autotest_common.sh@926 -- # '[' -z 132205 ']' 00:16:59.744 21:13:22 -- common/autotest_common.sh@930 -- # kill -0 132205 00:16:59.744 21:13:22 -- common/autotest_common.sh@931 -- # uname 00:16:59.744 21:13:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:00.002 21:13:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132205 00:17:00.002 killing process with pid 132205 00:17:00.002 21:13:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:00.002 21:13:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:00.002 21:13:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132205' 00:17:00.002 21:13:22 -- 
common/autotest_common.sh@945 -- # kill 132205 00:17:00.002 21:13:22 -- common/autotest_common.sh@950 -- # wait 132205 00:17:00.002 [2024-06-07 21:13:22.431336] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:00.002 [2024-06-07 21:13:22.431454] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:00.002 ************************************ 00:17:00.002 END TEST raid_state_function_test_sb 00:17:00.002 ************************************ 00:17:00.002 21:13:22 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:00.002 00:17:00.002 real 0m14.293s 00:17:00.002 user 0m26.670s 00:17:00.002 sys 0m1.716s 00:17:00.002 21:13:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.002 21:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:00.260 21:13:22 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:17:00.260 21:13:22 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:00.260 21:13:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:00.260 21:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:00.260 ************************************ 00:17:00.260 START TEST raid_superblock_test 00:17:00.260 ************************************ 00:17:00.260 21:13:22 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:17:00.260 21:13:22 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:17:00.260 21:13:22 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:17:00.260 21:13:22 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:00.260 21:13:22 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:00.260 21:13:22 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:00.260 21:13:22 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:00.260 21:13:22 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:00.260 21:13:22 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:00.260 21:13:22 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:00.260 21:13:22 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:00.260 21:13:22 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:00.260 21:13:22 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:00.260 21:13:22 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:00.260 21:13:22 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:17:00.260 21:13:22 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:00.260 21:13:22 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:00.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:00.260 21:13:22 -- bdev/bdev_raid.sh@357 -- # raid_pid=132675 00:17:00.260 21:13:22 -- bdev/bdev_raid.sh@358 -- # waitforlisten 132675 /var/tmp/spdk-raid.sock 00:17:00.260 21:13:22 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:00.260 21:13:22 -- common/autotest_common.sh@819 -- # '[' -z 132675 ']' 00:17:00.260 21:13:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:00.260 21:13:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:00.260 21:13:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
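The setup the trace unrolls four times below stacks a passthru bdev with a fixed UUID on top of each malloc bdev, giving the superblock test stable, predictable identifiers. Folded back into the loop it effectively is (a sketch, not the script's literal text):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # Four 32 MiB malloc bdevs with 512-byte blocks, each wrapped in a
    # passthru bdev carrying a deterministic UUID (pt1..pt4).
    for i in 1 2 3 4; do
        rpc bdev_malloc_create 32 512 -b "malloc$i"
        rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done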
00:17:00.260 21:13:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:00.260 21:13:22 -- common/autotest_common.sh@10 -- # set +x 00:17:00.260 [2024-06-07 21:13:22.775489] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:00.260 [2024-06-07 21:13:22.775702] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132675 ] 00:17:00.260 [2024-06-07 21:13:22.932834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.518 [2024-06-07 21:13:23.014624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.518 [2024-06-07 21:13:23.069013] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.450 21:13:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:01.450 21:13:23 -- common/autotest_common.sh@852 -- # return 0 00:17:01.450 21:13:23 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:01.450 21:13:23 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:01.450 21:13:23 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:01.450 21:13:23 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:01.450 21:13:23 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:01.450 21:13:23 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:01.450 21:13:23 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:01.450 21:13:23 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:01.450 21:13:23 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:01.450 malloc1 00:17:01.450 21:13:23 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:01.709 [2024-06-07 21:13:24.214032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:01.709 [2024-06-07 21:13:24.214192] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.709 [2024-06-07 21:13:24.214232] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:01.709 [2024-06-07 21:13:24.214279] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.709 [2024-06-07 21:13:24.216816] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.709 [2024-06-07 21:13:24.216903] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:01.709 pt1 00:17:01.709 21:13:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:01.709 21:13:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:01.709 21:13:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:01.709 21:13:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:01.710 21:13:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:01.710 21:13:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:01.710 21:13:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:01.710 21:13:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:01.710 21:13:24 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:01.969 malloc2 00:17:01.969 21:13:24 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:02.227 [2024-06-07 21:13:24.701103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:02.227 [2024-06-07 21:13:24.701221] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.227 [2024-06-07 21:13:24.701267] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:02.227 [2024-06-07 21:13:24.701320] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.227 [2024-06-07 21:13:24.703717] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.227 [2024-06-07 21:13:24.703779] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:02.227 pt2 00:17:02.227 21:13:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:02.227 21:13:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:02.227 21:13:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:02.227 21:13:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:02.227 21:13:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:02.227 21:13:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:02.227 21:13:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:02.227 21:13:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:02.227 21:13:24 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:02.485 malloc3 00:17:02.485 21:13:24 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:02.743 [2024-06-07 21:13:25.181200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:02.743 [2024-06-07 21:13:25.181322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.743 [2024-06-07 21:13:25.181366] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:02.743 [2024-06-07 21:13:25.181409] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.743 [2024-06-07 21:13:25.183910] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.743 [2024-06-07 21:13:25.183980] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:02.743 pt3 00:17:02.743 21:13:25 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:02.743 21:13:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:02.743 21:13:25 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:17:02.743 21:13:25 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:17:02.743 21:13:25 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:02.743 21:13:25 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:02.743 21:13:25 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:02.743 21:13:25 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:02.743 21:13:25 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:02.743 malloc4 00:17:03.001 21:13:25 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:03.001 [2024-06-07 21:13:25.652425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:03.001 [2024-06-07 21:13:25.652580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.001 [2024-06-07 21:13:25.652628] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:03.001 [2024-06-07 21:13:25.652669] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.001 [2024-06-07 21:13:25.655171] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.001 [2024-06-07 21:13:25.655242] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:03.001 pt4 00:17:03.001 21:13:25 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:03.001 21:13:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:03.001 21:13:25 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:03.260 [2024-06-07 21:13:25.860576] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:03.260 [2024-06-07 21:13:25.862495] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:03.260 [2024-06-07 21:13:25.862569] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:03.260 [2024-06-07 21:13:25.862679] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:03.260 [2024-06-07 21:13:25.862911] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:17:03.260 [2024-06-07 21:13:25.862926] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:03.260 [2024-06-07 21:13:25.863110] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:03.260 [2024-06-07 21:13:25.863532] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:17:03.260 [2024-06-07 21:13:25.863556] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:17:03.260 [2024-06-07 21:13:25.863741] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.260 21:13:25 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:03.260 21:13:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:03.260 21:13:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:03.260 21:13:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:03.260 21:13:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:03.260 21:13:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:03.260 21:13:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:03.260 21:13:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:03.260 21:13:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:03.260 21:13:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:03.260 21:13:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:17:03.260 21:13:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.518 21:13:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:03.518 "name": "raid_bdev1", 00:17:03.518 "uuid": "e77dc6c5-08f9-42d3-9ecb-d2c5ea90597d", 00:17:03.518 "strip_size_kb": 64, 00:17:03.518 "state": "online", 00:17:03.518 "raid_level": "raid0", 00:17:03.518 "superblock": true, 00:17:03.518 "num_base_bdevs": 4, 00:17:03.518 "num_base_bdevs_discovered": 4, 00:17:03.518 "num_base_bdevs_operational": 4, 00:17:03.518 "base_bdevs_list": [ 00:17:03.518 { 00:17:03.518 "name": "pt1", 00:17:03.518 "uuid": "f3e2f2c0-bfb5-59f5-9226-b61a847f46cd", 00:17:03.518 "is_configured": true, 00:17:03.518 "data_offset": 2048, 00:17:03.518 "data_size": 63488 00:17:03.518 }, 00:17:03.518 { 00:17:03.518 "name": "pt2", 00:17:03.518 "uuid": "480a42ee-aefe-5d18-b337-6052203a5c0d", 00:17:03.518 "is_configured": true, 00:17:03.518 "data_offset": 2048, 00:17:03.518 "data_size": 63488 00:17:03.518 }, 00:17:03.518 { 00:17:03.518 "name": "pt3", 00:17:03.518 "uuid": "4ca06af9-a63c-5092-8e23-482f5f3c4dc3", 00:17:03.518 "is_configured": true, 00:17:03.518 "data_offset": 2048, 00:17:03.518 "data_size": 63488 00:17:03.518 }, 00:17:03.518 { 00:17:03.518 "name": "pt4", 00:17:03.518 "uuid": "e8f303fc-7628-5474-8291-db12a7440d81", 00:17:03.518 "is_configured": true, 00:17:03.518 "data_offset": 2048, 00:17:03.518 "data_size": 63488 00:17:03.518 } 00:17:03.518 ] 00:17:03.518 }' 00:17:03.518 21:13:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:03.518 21:13:26 -- common/autotest_common.sh@10 -- # set +x 00:17:04.085 21:13:26 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:04.085 21:13:26 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:04.344 [2024-06-07 21:13:26.933712] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.344 21:13:26 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=e77dc6c5-08f9-42d3-9ecb-d2c5ea90597d 00:17:04.344 21:13:26 -- bdev/bdev_raid.sh@380 -- # '[' -z e77dc6c5-08f9-42d3-9ecb-d2c5ea90597d ']' 00:17:04.344 21:13:26 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:04.602 [2024-06-07 21:13:27.193531] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:04.602 [2024-06-07 21:13:27.193564] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.602 [2024-06-07 21:13:27.193690] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.602 [2024-06-07 21:13:27.193779] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.602 [2024-06-07 21:13:27.193792] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:17:04.602 21:13:27 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.602 21:13:27 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:04.860 21:13:27 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:04.861 21:13:27 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:04.861 21:13:27 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:04.861 21:13:27 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
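The cycle above — malloc bdevs wrapped in passthru bdevs, assembled into a raid0 bdev, verified online, then torn down — can be reproduced by hand with the same RPCs. A minimal shell sketch, assuming an SPDK bdev application is already listening on /var/tmp/spdk-raid.sock; the command names, sizes, strip size, and UUID pattern are taken from the log itself, while the loop structure is a condensation for illustration, not the harness source:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# one 32 MB / 512-byte-block malloc bdev plus a passthru wrapper per base bdev,
# mirroring the bdev_raid.sh@361-371 loop traced above
for i in 1 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b malloc$i
    $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
done
# assemble raid0 with a 64 KiB strip; -s writes a superblock onto the base bdevs
$rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
# confirm the array came up, then tear everything down again
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
$rpc bdev_raid_delete raid_bdev1
for i in 1 2 3 4; do $rpc bdev_passthru_delete pt$i; done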
00:17:05.119 21:13:27 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:05.119 21:13:27 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:05.378 21:13:27 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:05.378 21:13:27 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:05.636 21:13:28 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:05.636 21:13:28 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:05.895 21:13:28 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:05.895 21:13:28 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:06.153 21:13:28 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:06.153 21:13:28 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:06.153 21:13:28 -- common/autotest_common.sh@640 -- # local es=0 00:17:06.153 21:13:28 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:06.154 21:13:28 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:06.154 21:13:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:06.154 21:13:28 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:06.154 21:13:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:06.154 21:13:28 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:06.154 21:13:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:06.154 21:13:28 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:06.154 21:13:28 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:06.154 21:13:28 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:06.154 [2024-06-07 21:13:28.817868] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:06.154 [2024-06-07 21:13:28.820023] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:06.154 [2024-06-07 21:13:28.820094] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:06.154 [2024-06-07 21:13:28.820138] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:06.154 [2024-06-07 21:13:28.820191] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:06.154 [2024-06-07 21:13:28.820301] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:06.154 [2024-06-07 21:13:28.820354] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:06.154 [2024-06-07 
21:13:28.820409] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:17:06.154 [2024-06-07 21:13:28.820435] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.154 [2024-06-07 21:13:28.820445] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:17:06.154 request: 00:17:06.154 { 00:17:06.154 "name": "raid_bdev1", 00:17:06.154 "raid_level": "raid0", 00:17:06.154 "base_bdevs": [ 00:17:06.154 "malloc1", 00:17:06.154 "malloc2", 00:17:06.154 "malloc3", 00:17:06.154 "malloc4" 00:17:06.154 ], 00:17:06.154 "superblock": false, 00:17:06.154 "strip_size_kb": 64, 00:17:06.154 "method": "bdev_raid_create", 00:17:06.154 "req_id": 1 00:17:06.154 } 00:17:06.154 Got JSON-RPC error response 00:17:06.154 response: 00:17:06.154 { 00:17:06.154 "code": -17, 00:17:06.154 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:06.154 } 00:17:06.412 21:13:28 -- common/autotest_common.sh@643 -- # es=1 00:17:06.412 21:13:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:06.412 21:13:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:06.412 21:13:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:06.412 21:13:28 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.412 21:13:28 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:06.671 21:13:29 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:06.671 21:13:29 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:06.671 21:13:29 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:06.671 [2024-06-07 21:13:29.313926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:06.671 [2024-06-07 21:13:29.314022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.671 [2024-06-07 21:13:29.314054] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:06.671 [2024-06-07 21:13:29.314080] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.671 [2024-06-07 21:13:29.316444] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.671 [2024-06-07 21:13:29.316522] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:06.671 [2024-06-07 21:13:29.316637] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:06.671 [2024-06-07 21:13:29.316714] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:06.671 pt1 00:17:06.671 21:13:29 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:06.671 21:13:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:06.671 21:13:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:06.671 21:13:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:06.671 21:13:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:06.671 21:13:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:06.671 21:13:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:06.671 21:13:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:06.671 21:13:29 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:17:06.671 21:13:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:06.671 21:13:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.671 21:13:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.239 21:13:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:07.239 "name": "raid_bdev1", 00:17:07.239 "uuid": "e77dc6c5-08f9-42d3-9ecb-d2c5ea90597d", 00:17:07.239 "strip_size_kb": 64, 00:17:07.239 "state": "configuring", 00:17:07.239 "raid_level": "raid0", 00:17:07.239 "superblock": true, 00:17:07.239 "num_base_bdevs": 4, 00:17:07.239 "num_base_bdevs_discovered": 1, 00:17:07.239 "num_base_bdevs_operational": 4, 00:17:07.239 "base_bdevs_list": [ 00:17:07.239 { 00:17:07.239 "name": "pt1", 00:17:07.239 "uuid": "f3e2f2c0-bfb5-59f5-9226-b61a847f46cd", 00:17:07.239 "is_configured": true, 00:17:07.239 "data_offset": 2048, 00:17:07.239 "data_size": 63488 00:17:07.239 }, 00:17:07.239 { 00:17:07.239 "name": null, 00:17:07.239 "uuid": "480a42ee-aefe-5d18-b337-6052203a5c0d", 00:17:07.239 "is_configured": false, 00:17:07.239 "data_offset": 2048, 00:17:07.239 "data_size": 63488 00:17:07.239 }, 00:17:07.239 { 00:17:07.239 "name": null, 00:17:07.239 "uuid": "4ca06af9-a63c-5092-8e23-482f5f3c4dc3", 00:17:07.239 "is_configured": false, 00:17:07.239 "data_offset": 2048, 00:17:07.239 "data_size": 63488 00:17:07.239 }, 00:17:07.239 { 00:17:07.239 "name": null, 00:17:07.239 "uuid": "e8f303fc-7628-5474-8291-db12a7440d81", 00:17:07.239 "is_configured": false, 00:17:07.239 "data_offset": 2048, 00:17:07.239 "data_size": 63488 00:17:07.239 } 00:17:07.239 ] 00:17:07.239 }' 00:17:07.239 21:13:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:07.239 21:13:29 -- common/autotest_common.sh@10 -- # set +x 00:17:07.806 21:13:30 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:17:07.806 21:13:30 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:08.095 [2024-06-07 21:13:30.578261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:08.095 [2024-06-07 21:13:30.578365] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.096 [2024-06-07 21:13:30.578409] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:08.096 [2024-06-07 21:13:30.578432] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.096 [2024-06-07 21:13:30.578917] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.096 [2024-06-07 21:13:30.579003] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:08.096 [2024-06-07 21:13:30.579095] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:08.096 [2024-06-07 21:13:30.579125] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:08.096 pt2 00:17:08.096 21:13:30 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:08.369 [2024-06-07 21:13:30.782319] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:08.369 21:13:30 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:08.369 21:13:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:17:08.369 21:13:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:08.369 21:13:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:08.369 21:13:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:08.369 21:13:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:08.369 21:13:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:08.369 21:13:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:08.369 21:13:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:08.369 21:13:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:08.369 21:13:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.369 21:13:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.369 21:13:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:08.369 "name": "raid_bdev1", 00:17:08.369 "uuid": "e77dc6c5-08f9-42d3-9ecb-d2c5ea90597d", 00:17:08.369 "strip_size_kb": 64, 00:17:08.369 "state": "configuring", 00:17:08.369 "raid_level": "raid0", 00:17:08.369 "superblock": true, 00:17:08.369 "num_base_bdevs": 4, 00:17:08.369 "num_base_bdevs_discovered": 1, 00:17:08.369 "num_base_bdevs_operational": 4, 00:17:08.369 "base_bdevs_list": [ 00:17:08.369 { 00:17:08.369 "name": "pt1", 00:17:08.369 "uuid": "f3e2f2c0-bfb5-59f5-9226-b61a847f46cd", 00:17:08.369 "is_configured": true, 00:17:08.369 "data_offset": 2048, 00:17:08.369 "data_size": 63488 00:17:08.369 }, 00:17:08.369 { 00:17:08.369 "name": null, 00:17:08.369 "uuid": "480a42ee-aefe-5d18-b337-6052203a5c0d", 00:17:08.369 "is_configured": false, 00:17:08.369 "data_offset": 2048, 00:17:08.369 "data_size": 63488 00:17:08.369 }, 00:17:08.369 { 00:17:08.369 "name": null, 00:17:08.369 "uuid": "4ca06af9-a63c-5092-8e23-482f5f3c4dc3", 00:17:08.370 "is_configured": false, 00:17:08.370 "data_offset": 2048, 00:17:08.370 "data_size": 63488 00:17:08.370 }, 00:17:08.370 { 00:17:08.370 "name": null, 00:17:08.370 "uuid": "e8f303fc-7628-5474-8291-db12a7440d81", 00:17:08.370 "is_configured": false, 00:17:08.370 "data_offset": 2048, 00:17:08.370 "data_size": 63488 00:17:08.370 } 00:17:08.370 ] 00:17:08.370 }' 00:17:08.370 21:13:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:08.370 21:13:31 -- common/autotest_common.sh@10 -- # set +x 00:17:09.304 21:13:31 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:09.304 21:13:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:09.304 21:13:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:09.304 [2024-06-07 21:13:31.922595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:09.304 [2024-06-07 21:13:31.922717] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.304 [2024-06-07 21:13:31.922757] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:09.304 [2024-06-07 21:13:31.922783] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.305 [2024-06-07 21:13:31.923340] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.305 [2024-06-07 21:13:31.923417] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:09.305 [2024-06-07 21:13:31.923529] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:17:09.305 [2024-06-07 21:13:31.923557] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:09.305 pt2 00:17:09.305 21:13:31 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:09.305 21:13:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:09.305 21:13:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:09.562 [2024-06-07 21:13:32.182594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:09.562 [2024-06-07 21:13:32.182687] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.562 [2024-06-07 21:13:32.182716] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:09.562 [2024-06-07 21:13:32.182741] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.562 [2024-06-07 21:13:32.183218] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.562 [2024-06-07 21:13:32.183287] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:09.562 [2024-06-07 21:13:32.183370] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:09.562 [2024-06-07 21:13:32.183395] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:09.562 pt3 00:17:09.562 21:13:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:09.563 21:13:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:09.563 21:13:32 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:09.821 [2024-06-07 21:13:32.386663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:09.821 [2024-06-07 21:13:32.386810] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.821 [2024-06-07 21:13:32.386852] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:09.821 [2024-06-07 21:13:32.386881] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.821 [2024-06-07 21:13:32.387333] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.821 [2024-06-07 21:13:32.387447] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:09.821 [2024-06-07 21:13:32.387562] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:17:09.821 [2024-06-07 21:13:32.387591] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:09.821 [2024-06-07 21:13:32.387733] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:17:09.821 [2024-06-07 21:13:32.387756] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:09.821 [2024-06-07 21:13:32.387843] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:09.821 [2024-06-07 21:13:32.388172] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:17:09.821 [2024-06-07 21:13:32.388195] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:17:09.821 [2024-06-07 21:13:32.388302] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:17:09.821 pt4 00:17:09.821 21:13:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:09.821 21:13:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:09.821 21:13:32 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:09.821 21:13:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:09.821 21:13:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:09.821 21:13:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:09.821 21:13:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:09.821 21:13:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:09.821 21:13:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:09.821 21:13:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:09.821 21:13:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:09.821 21:13:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:09.821 21:13:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.821 21:13:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.079 21:13:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:10.079 "name": "raid_bdev1", 00:17:10.079 "uuid": "e77dc6c5-08f9-42d3-9ecb-d2c5ea90597d", 00:17:10.079 "strip_size_kb": 64, 00:17:10.079 "state": "online", 00:17:10.079 "raid_level": "raid0", 00:17:10.079 "superblock": true, 00:17:10.079 "num_base_bdevs": 4, 00:17:10.079 "num_base_bdevs_discovered": 4, 00:17:10.079 "num_base_bdevs_operational": 4, 00:17:10.079 "base_bdevs_list": [ 00:17:10.079 { 00:17:10.079 "name": "pt1", 00:17:10.079 "uuid": "f3e2f2c0-bfb5-59f5-9226-b61a847f46cd", 00:17:10.079 "is_configured": true, 00:17:10.079 "data_offset": 2048, 00:17:10.079 "data_size": 63488 00:17:10.079 }, 00:17:10.079 { 00:17:10.079 "name": "pt2", 00:17:10.079 "uuid": "480a42ee-aefe-5d18-b337-6052203a5c0d", 00:17:10.079 "is_configured": true, 00:17:10.079 "data_offset": 2048, 00:17:10.079 "data_size": 63488 00:17:10.079 }, 00:17:10.079 { 00:17:10.079 "name": "pt3", 00:17:10.079 "uuid": "4ca06af9-a63c-5092-8e23-482f5f3c4dc3", 00:17:10.079 "is_configured": true, 00:17:10.079 "data_offset": 2048, 00:17:10.079 "data_size": 63488 00:17:10.079 }, 00:17:10.079 { 00:17:10.079 "name": "pt4", 00:17:10.079 "uuid": "e8f303fc-7628-5474-8291-db12a7440d81", 00:17:10.079 "is_configured": true, 00:17:10.079 "data_offset": 2048, 00:17:10.079 "data_size": 63488 00:17:10.079 } 00:17:10.079 ] 00:17:10.079 }' 00:17:10.079 21:13:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:10.079 21:13:32 -- common/autotest_common.sh@10 -- # set +x 00:17:10.643 21:13:33 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:10.643 21:13:33 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:10.900 [2024-06-07 21:13:33.559221] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:11.158 21:13:33 -- bdev/bdev_raid.sh@430 -- # '[' e77dc6c5-08f9-42d3-9ecb-d2c5ea90597d '!=' e77dc6c5-08f9-42d3-9ecb-d2c5ea90597d ']' 00:17:11.158 21:13:33 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:17:11.158 21:13:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:11.158 21:13:33 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:11.158 21:13:33 -- bdev/bdev_raid.sh@511 -- # killprocess 132675 00:17:11.158 21:13:33 -- common/autotest_common.sh@926 -- # '[' -z 
132675 ']' 00:17:11.158 21:13:33 -- common/autotest_common.sh@930 -- # kill -0 132675 00:17:11.158 21:13:33 -- common/autotest_common.sh@931 -- # uname 00:17:11.158 21:13:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:11.158 21:13:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132675 00:17:11.158 killing process with pid 132675 00:17:11.158 21:13:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:11.158 21:13:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:11.158 21:13:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132675' 00:17:11.158 21:13:33 -- common/autotest_common.sh@945 -- # kill 132675 00:17:11.158 21:13:33 -- common/autotest_common.sh@950 -- # wait 132675 00:17:11.158 [2024-06-07 21:13:33.598026] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:11.158 [2024-06-07 21:13:33.598113] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.158 [2024-06-07 21:13:33.598216] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:11.158 [2024-06-07 21:13:33.598236] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:17:11.158 [2024-06-07 21:13:33.640148] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:11.416 ************************************ 00:17:11.416 END TEST raid_superblock_test 00:17:11.416 ************************************ 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:11.416 00:17:11.416 real 0m11.142s 00:17:11.416 user 0m20.533s 00:17:11.416 sys 0m1.302s 00:17:11.416 21:13:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:11.416 21:13:33 -- common/autotest_common.sh@10 -- # set +x 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:17:11.416 21:13:33 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:11.416 21:13:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:11.416 21:13:33 -- common/autotest_common.sh@10 -- # set +x 00:17:11.416 ************************************ 00:17:11.416 START TEST raid_state_function_test 00:17:11.416 ************************************ 00:17:11.416 21:13:33 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:11.416 21:13:33 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@226 -- # raid_pid=133020 00:17:11.416 Process raid pid: 133020 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 133020' 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@228 -- # waitforlisten 133020 /var/tmp/spdk-raid.sock 00:17:11.416 21:13:33 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:11.416 21:13:33 -- common/autotest_common.sh@819 -- # '[' -z 133020 ']' 00:17:11.416 21:13:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:11.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:11.416 21:13:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:11.416 21:13:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:11.416 21:13:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:11.416 21:13:33 -- common/autotest_common.sh@10 -- # set +x 00:17:11.416 [2024-06-07 21:13:33.981716] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
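For context, the application the state-function test drives is brought up roughly as follows. A sketch using the binary path and flags printed in the log; the polling loop is an assumed stand-in for the harness's waitforlisten helper (which, per the xtrace markers above, lives in common/autotest_common.sh), and rpc_get_methods is used here only as a liveness probe:

# launch the minimal bdev app with the raid log flag, as in bdev_raid.sh@225
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
# poll the UNIX-domain RPC socket until the app answers; the real helper is
# waitforlisten, this until-loop is an illustrative approximation of it
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done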
00:17:11.417 [2024-06-07 21:13:33.981939] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.674 [2024-06-07 21:13:34.140064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.674 [2024-06-07 21:13:34.202545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.674 [2024-06-07 21:13:34.256065] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:12.239 21:13:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:12.239 21:13:34 -- common/autotest_common.sh@852 -- # return 0 00:17:12.239 21:13:34 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:12.497 [2024-06-07 21:13:35.074442] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:12.497 [2024-06-07 21:13:35.074537] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:12.497 [2024-06-07 21:13:35.074551] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:12.497 [2024-06-07 21:13:35.074573] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:12.497 [2024-06-07 21:13:35.074580] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:12.497 [2024-06-07 21:13:35.074618] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:12.497 [2024-06-07 21:13:35.074627] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:12.497 [2024-06-07 21:13:35.074649] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:12.497 21:13:35 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:12.497 21:13:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:12.497 21:13:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:12.497 21:13:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:12.497 21:13:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:12.497 21:13:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:12.497 21:13:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:12.497 21:13:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:12.497 21:13:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:12.497 21:13:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:12.497 21:13:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.497 21:13:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.754 21:13:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.754 "name": "Existed_Raid", 00:17:12.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.754 "strip_size_kb": 64, 00:17:12.754 "state": "configuring", 00:17:12.754 "raid_level": "concat", 00:17:12.754 "superblock": false, 00:17:12.754 "num_base_bdevs": 4, 00:17:12.754 "num_base_bdevs_discovered": 0, 00:17:12.754 "num_base_bdevs_operational": 4, 00:17:12.754 "base_bdevs_list": [ 00:17:12.754 { 00:17:12.754 
"name": "BaseBdev1", 00:17:12.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.754 "is_configured": false, 00:17:12.755 "data_offset": 0, 00:17:12.755 "data_size": 0 00:17:12.755 }, 00:17:12.755 { 00:17:12.755 "name": "BaseBdev2", 00:17:12.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.755 "is_configured": false, 00:17:12.755 "data_offset": 0, 00:17:12.755 "data_size": 0 00:17:12.755 }, 00:17:12.755 { 00:17:12.755 "name": "BaseBdev3", 00:17:12.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.755 "is_configured": false, 00:17:12.755 "data_offset": 0, 00:17:12.755 "data_size": 0 00:17:12.755 }, 00:17:12.755 { 00:17:12.755 "name": "BaseBdev4", 00:17:12.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.755 "is_configured": false, 00:17:12.755 "data_offset": 0, 00:17:12.755 "data_size": 0 00:17:12.755 } 00:17:12.755 ] 00:17:12.755 }' 00:17:12.755 21:13:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.755 21:13:35 -- common/autotest_common.sh@10 -- # set +x 00:17:13.687 21:13:36 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:13.687 [2024-06-07 21:13:36.206512] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:13.687 [2024-06-07 21:13:36.206572] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:13.687 21:13:36 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:13.944 [2024-06-07 21:13:36.402567] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:13.944 [2024-06-07 21:13:36.402646] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:13.944 [2024-06-07 21:13:36.402673] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:13.944 [2024-06-07 21:13:36.402704] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:13.944 [2024-06-07 21:13:36.402712] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:13.944 [2024-06-07 21:13:36.402745] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:13.944 [2024-06-07 21:13:36.402753] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:13.944 [2024-06-07 21:13:36.402774] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:13.944 21:13:36 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:14.201 [2024-06-07 21:13:36.661488] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:14.201 BaseBdev1 00:17:14.201 21:13:36 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:14.201 21:13:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:14.201 21:13:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:14.201 21:13:36 -- common/autotest_common.sh@889 -- # local i 00:17:14.201 21:13:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:14.201 21:13:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:14.201 21:13:36 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:14.458 21:13:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:14.458 [ 00:17:14.458 { 00:17:14.458 "name": "BaseBdev1", 00:17:14.458 "aliases": [ 00:17:14.458 "e8dedeb9-d1bc-468c-b24e-335711ff1975" 00:17:14.458 ], 00:17:14.458 "product_name": "Malloc disk", 00:17:14.458 "block_size": 512, 00:17:14.458 "num_blocks": 65536, 00:17:14.458 "uuid": "e8dedeb9-d1bc-468c-b24e-335711ff1975", 00:17:14.458 "assigned_rate_limits": { 00:17:14.458 "rw_ios_per_sec": 0, 00:17:14.458 "rw_mbytes_per_sec": 0, 00:17:14.458 "r_mbytes_per_sec": 0, 00:17:14.458 "w_mbytes_per_sec": 0 00:17:14.458 }, 00:17:14.458 "claimed": true, 00:17:14.458 "claim_type": "exclusive_write", 00:17:14.458 "zoned": false, 00:17:14.458 "supported_io_types": { 00:17:14.458 "read": true, 00:17:14.458 "write": true, 00:17:14.458 "unmap": true, 00:17:14.458 "write_zeroes": true, 00:17:14.458 "flush": true, 00:17:14.458 "reset": true, 00:17:14.458 "compare": false, 00:17:14.458 "compare_and_write": false, 00:17:14.458 "abort": true, 00:17:14.458 "nvme_admin": false, 00:17:14.458 "nvme_io": false 00:17:14.458 }, 00:17:14.458 "memory_domains": [ 00:17:14.458 { 00:17:14.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.458 "dma_device_type": 2 00:17:14.459 } 00:17:14.459 ], 00:17:14.459 "driver_specific": {} 00:17:14.459 } 00:17:14.459 ] 00:17:14.459 21:13:37 -- common/autotest_common.sh@895 -- # return 0 00:17:14.459 21:13:37 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:14.459 21:13:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:14.459 21:13:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:14.459 21:13:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:14.459 21:13:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:14.459 21:13:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:14.459 21:13:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:14.459 21:13:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:14.459 21:13:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:14.459 21:13:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:14.459 21:13:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.459 21:13:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.716 21:13:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:14.716 "name": "Existed_Raid", 00:17:14.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.716 "strip_size_kb": 64, 00:17:14.716 "state": "configuring", 00:17:14.716 "raid_level": "concat", 00:17:14.716 "superblock": false, 00:17:14.716 "num_base_bdevs": 4, 00:17:14.716 "num_base_bdevs_discovered": 1, 00:17:14.716 "num_base_bdevs_operational": 4, 00:17:14.716 "base_bdevs_list": [ 00:17:14.716 { 00:17:14.716 "name": "BaseBdev1", 00:17:14.716 "uuid": "e8dedeb9-d1bc-468c-b24e-335711ff1975", 00:17:14.716 "is_configured": true, 00:17:14.716 "data_offset": 0, 00:17:14.716 "data_size": 65536 00:17:14.716 }, 00:17:14.716 { 00:17:14.716 "name": "BaseBdev2", 00:17:14.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.716 "is_configured": false, 00:17:14.716 "data_offset": 0, 00:17:14.716 "data_size": 0 00:17:14.716 }, 
00:17:14.716 { 00:17:14.716 "name": "BaseBdev3", 00:17:14.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.716 "is_configured": false, 00:17:14.716 "data_offset": 0, 00:17:14.716 "data_size": 0 00:17:14.716 }, 00:17:14.716 { 00:17:14.716 "name": "BaseBdev4", 00:17:14.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.716 "is_configured": false, 00:17:14.716 "data_offset": 0, 00:17:14.716 "data_size": 0 00:17:14.716 } 00:17:14.716 ] 00:17:14.716 }' 00:17:14.716 21:13:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:14.716 21:13:37 -- common/autotest_common.sh@10 -- # set +x 00:17:15.649 21:13:38 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:15.649 [2024-06-07 21:13:38.285903] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:15.649 [2024-06-07 21:13:38.285990] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:15.649 21:13:38 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:15.649 21:13:38 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:15.907 [2024-06-07 21:13:38.558070] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:15.907 [2024-06-07 21:13:38.560667] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:15.907 [2024-06-07 21:13:38.560808] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:15.907 [2024-06-07 21:13:38.560838] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:15.907 [2024-06-07 21:13:38.560879] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:15.907 [2024-06-07 21:13:38.560889] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:15.907 [2024-06-07 21:13:38.560919] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:15.907 21:13:38 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:15.907 21:13:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:15.907 21:13:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:15.907 21:13:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:15.907 21:13:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:15.907 21:13:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:15.907 21:13:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:15.907 21:13:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:15.908 21:13:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:15.908 21:13:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:15.908 21:13:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:15.908 21:13:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:15.908 21:13:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.908 21:13:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.166 21:13:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:16.166 "name": "Existed_Raid", 00:17:16.166 
"uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.166 "strip_size_kb": 64, 00:17:16.166 "state": "configuring", 00:17:16.166 "raid_level": "concat", 00:17:16.166 "superblock": false, 00:17:16.166 "num_base_bdevs": 4, 00:17:16.166 "num_base_bdevs_discovered": 1, 00:17:16.166 "num_base_bdevs_operational": 4, 00:17:16.166 "base_bdevs_list": [ 00:17:16.166 { 00:17:16.166 "name": "BaseBdev1", 00:17:16.166 "uuid": "e8dedeb9-d1bc-468c-b24e-335711ff1975", 00:17:16.166 "is_configured": true, 00:17:16.166 "data_offset": 0, 00:17:16.166 "data_size": 65536 00:17:16.166 }, 00:17:16.166 { 00:17:16.166 "name": "BaseBdev2", 00:17:16.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.166 "is_configured": false, 00:17:16.166 "data_offset": 0, 00:17:16.166 "data_size": 0 00:17:16.166 }, 00:17:16.166 { 00:17:16.166 "name": "BaseBdev3", 00:17:16.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.166 "is_configured": false, 00:17:16.166 "data_offset": 0, 00:17:16.166 "data_size": 0 00:17:16.166 }, 00:17:16.166 { 00:17:16.166 "name": "BaseBdev4", 00:17:16.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.166 "is_configured": false, 00:17:16.166 "data_offset": 0, 00:17:16.166 "data_size": 0 00:17:16.166 } 00:17:16.166 ] 00:17:16.166 }' 00:17:16.166 21:13:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:16.166 21:13:38 -- common/autotest_common.sh@10 -- # set +x 00:17:17.099 21:13:39 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:17.099 [2024-06-07 21:13:39.664999] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:17.099 BaseBdev2 00:17:17.099 21:13:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:17.099 21:13:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:17.099 21:13:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:17.099 21:13:39 -- common/autotest_common.sh@889 -- # local i 00:17:17.099 21:13:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:17.099 21:13:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:17.099 21:13:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:17.357 21:13:39 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:17.615 [ 00:17:17.615 { 00:17:17.615 "name": "BaseBdev2", 00:17:17.615 "aliases": [ 00:17:17.615 "45082ccb-2f01-4e89-96cd-8dda715c5ba7" 00:17:17.615 ], 00:17:17.615 "product_name": "Malloc disk", 00:17:17.615 "block_size": 512, 00:17:17.615 "num_blocks": 65536, 00:17:17.615 "uuid": "45082ccb-2f01-4e89-96cd-8dda715c5ba7", 00:17:17.615 "assigned_rate_limits": { 00:17:17.615 "rw_ios_per_sec": 0, 00:17:17.615 "rw_mbytes_per_sec": 0, 00:17:17.615 "r_mbytes_per_sec": 0, 00:17:17.615 "w_mbytes_per_sec": 0 00:17:17.615 }, 00:17:17.615 "claimed": true, 00:17:17.615 "claim_type": "exclusive_write", 00:17:17.615 "zoned": false, 00:17:17.615 "supported_io_types": { 00:17:17.615 "read": true, 00:17:17.615 "write": true, 00:17:17.615 "unmap": true, 00:17:17.615 "write_zeroes": true, 00:17:17.615 "flush": true, 00:17:17.615 "reset": true, 00:17:17.615 "compare": false, 00:17:17.615 "compare_and_write": false, 00:17:17.615 "abort": true, 00:17:17.615 "nvme_admin": false, 00:17:17.615 "nvme_io": false 00:17:17.615 }, 00:17:17.615 "memory_domains": [ 
00:17:17.615 { 00:17:17.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.615 "dma_device_type": 2 00:17:17.615 } 00:17:17.615 ], 00:17:17.615 "driver_specific": {} 00:17:17.615 } 00:17:17.615 ] 00:17:17.615 21:13:40 -- common/autotest_common.sh@895 -- # return 0 00:17:17.615 21:13:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:17.615 21:13:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:17.615 21:13:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:17.615 21:13:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:17.615 21:13:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:17.615 21:13:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:17.615 21:13:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:17.615 21:13:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:17.615 21:13:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:17.615 21:13:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:17.615 21:13:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:17.615 21:13:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:17.615 21:13:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.615 21:13:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.873 21:13:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:17.873 "name": "Existed_Raid", 00:17:17.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.873 "strip_size_kb": 64, 00:17:17.873 "state": "configuring", 00:17:17.873 "raid_level": "concat", 00:17:17.873 "superblock": false, 00:17:17.873 "num_base_bdevs": 4, 00:17:17.873 "num_base_bdevs_discovered": 2, 00:17:17.873 "num_base_bdevs_operational": 4, 00:17:17.873 "base_bdevs_list": [ 00:17:17.873 { 00:17:17.873 "name": "BaseBdev1", 00:17:17.873 "uuid": "e8dedeb9-d1bc-468c-b24e-335711ff1975", 00:17:17.873 "is_configured": true, 00:17:17.873 "data_offset": 0, 00:17:17.873 "data_size": 65536 00:17:17.873 }, 00:17:17.873 { 00:17:17.873 "name": "BaseBdev2", 00:17:17.873 "uuid": "45082ccb-2f01-4e89-96cd-8dda715c5ba7", 00:17:17.873 "is_configured": true, 00:17:17.873 "data_offset": 0, 00:17:17.873 "data_size": 65536 00:17:17.873 }, 00:17:17.873 { 00:17:17.873 "name": "BaseBdev3", 00:17:17.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.873 "is_configured": false, 00:17:17.873 "data_offset": 0, 00:17:17.873 "data_size": 0 00:17:17.873 }, 00:17:17.873 { 00:17:17.873 "name": "BaseBdev4", 00:17:17.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.873 "is_configured": false, 00:17:17.873 "data_offset": 0, 00:17:17.873 "data_size": 0 00:17:17.873 } 00:17:17.873 ] 00:17:17.873 }' 00:17:17.873 21:13:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:17.873 21:13:40 -- common/autotest_common.sh@10 -- # set +x 00:17:18.438 21:13:40 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:18.696 [2024-06-07 21:13:41.198887] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:18.696 BaseBdev3 00:17:18.696 21:13:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:18.696 21:13:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:18.696 21:13:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:18.696 
21:13:41 -- common/autotest_common.sh@889 -- # local i 00:17:18.696 21:13:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:18.696 21:13:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:18.696 21:13:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:18.953 21:13:41 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:19.212 [ 00:17:19.212 { 00:17:19.212 "name": "BaseBdev3", 00:17:19.212 "aliases": [ 00:17:19.212 "df75b5b1-acb5-411d-b48c-319565ca0348" 00:17:19.212 ], 00:17:19.212 "product_name": "Malloc disk", 00:17:19.212 "block_size": 512, 00:17:19.212 "num_blocks": 65536, 00:17:19.212 "uuid": "df75b5b1-acb5-411d-b48c-319565ca0348", 00:17:19.212 "assigned_rate_limits": { 00:17:19.212 "rw_ios_per_sec": 0, 00:17:19.212 "rw_mbytes_per_sec": 0, 00:17:19.212 "r_mbytes_per_sec": 0, 00:17:19.212 "w_mbytes_per_sec": 0 00:17:19.212 }, 00:17:19.212 "claimed": true, 00:17:19.212 "claim_type": "exclusive_write", 00:17:19.212 "zoned": false, 00:17:19.212 "supported_io_types": { 00:17:19.212 "read": true, 00:17:19.212 "write": true, 00:17:19.212 "unmap": true, 00:17:19.212 "write_zeroes": true, 00:17:19.212 "flush": true, 00:17:19.212 "reset": true, 00:17:19.212 "compare": false, 00:17:19.212 "compare_and_write": false, 00:17:19.212 "abort": true, 00:17:19.212 "nvme_admin": false, 00:17:19.212 "nvme_io": false 00:17:19.212 }, 00:17:19.212 "memory_domains": [ 00:17:19.212 { 00:17:19.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.212 "dma_device_type": 2 00:17:19.212 } 00:17:19.212 ], 00:17:19.212 "driver_specific": {} 00:17:19.212 } 00:17:19.212 ] 00:17:19.212 21:13:41 -- common/autotest_common.sh@895 -- # return 0 00:17:19.212 21:13:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:19.212 21:13:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:19.212 21:13:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:19.212 21:13:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:19.212 21:13:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:19.212 21:13:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:19.212 21:13:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:19.212 21:13:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:19.212 21:13:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:19.212 21:13:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:19.212 21:13:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:19.212 21:13:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:19.212 21:13:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.212 21:13:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.212 21:13:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:19.212 "name": "Existed_Raid", 00:17:19.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.212 "strip_size_kb": 64, 00:17:19.212 "state": "configuring", 00:17:19.212 "raid_level": "concat", 00:17:19.212 "superblock": false, 00:17:19.212 "num_base_bdevs": 4, 00:17:19.212 "num_base_bdevs_discovered": 3, 00:17:19.212 "num_base_bdevs_operational": 4, 00:17:19.212 "base_bdevs_list": [ 00:17:19.212 { 00:17:19.212 "name": 
"BaseBdev1", 00:17:19.212 "uuid": "e8dedeb9-d1bc-468c-b24e-335711ff1975", 00:17:19.212 "is_configured": true, 00:17:19.212 "data_offset": 0, 00:17:19.212 "data_size": 65536 00:17:19.212 }, 00:17:19.212 { 00:17:19.212 "name": "BaseBdev2", 00:17:19.212 "uuid": "45082ccb-2f01-4e89-96cd-8dda715c5ba7", 00:17:19.212 "is_configured": true, 00:17:19.212 "data_offset": 0, 00:17:19.212 "data_size": 65536 00:17:19.212 }, 00:17:19.212 { 00:17:19.212 "name": "BaseBdev3", 00:17:19.212 "uuid": "df75b5b1-acb5-411d-b48c-319565ca0348", 00:17:19.212 "is_configured": true, 00:17:19.212 "data_offset": 0, 00:17:19.212 "data_size": 65536 00:17:19.212 }, 00:17:19.212 { 00:17:19.212 "name": "BaseBdev4", 00:17:19.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.212 "is_configured": false, 00:17:19.212 "data_offset": 0, 00:17:19.212 "data_size": 0 00:17:19.212 } 00:17:19.212 ] 00:17:19.212 }' 00:17:19.212 21:13:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:19.212 21:13:41 -- common/autotest_common.sh@10 -- # set +x 00:17:20.146 21:13:42 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:20.146 [2024-06-07 21:13:42.820794] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:20.146 [2024-06-07 21:13:42.820887] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:17:20.146 [2024-06-07 21:13:42.820900] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:20.404 [2024-06-07 21:13:42.821107] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:20.404 [2024-06-07 21:13:42.821554] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:17:20.404 [2024-06-07 21:13:42.821568] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:17:20.404 [2024-06-07 21:13:42.821800] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.404 BaseBdev4 00:17:20.404 21:13:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:20.404 21:13:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:17:20.404 21:13:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:20.404 21:13:42 -- common/autotest_common.sh@889 -- # local i 00:17:20.404 21:13:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:20.404 21:13:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:20.404 21:13:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:20.404 21:13:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:20.662 [ 00:17:20.662 { 00:17:20.662 "name": "BaseBdev4", 00:17:20.662 "aliases": [ 00:17:20.662 "ca3b221d-6c64-4039-93a0-f9f80213190b" 00:17:20.662 ], 00:17:20.662 "product_name": "Malloc disk", 00:17:20.662 "block_size": 512, 00:17:20.662 "num_blocks": 65536, 00:17:20.662 "uuid": "ca3b221d-6c64-4039-93a0-f9f80213190b", 00:17:20.662 "assigned_rate_limits": { 00:17:20.662 "rw_ios_per_sec": 0, 00:17:20.662 "rw_mbytes_per_sec": 0, 00:17:20.662 "r_mbytes_per_sec": 0, 00:17:20.662 "w_mbytes_per_sec": 0 00:17:20.662 }, 00:17:20.662 "claimed": true, 00:17:20.662 "claim_type": "exclusive_write", 00:17:20.662 "zoned": false, 00:17:20.662 
"supported_io_types": { 00:17:20.662 "read": true, 00:17:20.662 "write": true, 00:17:20.662 "unmap": true, 00:17:20.662 "write_zeroes": true, 00:17:20.662 "flush": true, 00:17:20.662 "reset": true, 00:17:20.662 "compare": false, 00:17:20.662 "compare_and_write": false, 00:17:20.662 "abort": true, 00:17:20.662 "nvme_admin": false, 00:17:20.662 "nvme_io": false 00:17:20.662 }, 00:17:20.662 "memory_domains": [ 00:17:20.662 { 00:17:20.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.662 "dma_device_type": 2 00:17:20.662 } 00:17:20.662 ], 00:17:20.662 "driver_specific": {} 00:17:20.662 } 00:17:20.662 ] 00:17:20.662 21:13:43 -- common/autotest_common.sh@895 -- # return 0 00:17:20.662 21:13:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:20.662 21:13:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:20.662 21:13:43 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:20.662 21:13:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:20.662 21:13:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:20.662 21:13:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:20.662 21:13:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:20.662 21:13:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:20.662 21:13:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:20.662 21:13:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:20.662 21:13:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:20.662 21:13:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:20.662 21:13:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.662 21:13:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.921 21:13:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:20.921 "name": "Existed_Raid", 00:17:20.921 "uuid": "5d03d593-0385-468f-b24c-413260739211", 00:17:20.921 "strip_size_kb": 64, 00:17:20.921 "state": "online", 00:17:20.921 "raid_level": "concat", 00:17:20.921 "superblock": false, 00:17:20.921 "num_base_bdevs": 4, 00:17:20.921 "num_base_bdevs_discovered": 4, 00:17:20.921 "num_base_bdevs_operational": 4, 00:17:20.921 "base_bdevs_list": [ 00:17:20.921 { 00:17:20.921 "name": "BaseBdev1", 00:17:20.921 "uuid": "e8dedeb9-d1bc-468c-b24e-335711ff1975", 00:17:20.921 "is_configured": true, 00:17:20.921 "data_offset": 0, 00:17:20.921 "data_size": 65536 00:17:20.921 }, 00:17:20.921 { 00:17:20.921 "name": "BaseBdev2", 00:17:20.921 "uuid": "45082ccb-2f01-4e89-96cd-8dda715c5ba7", 00:17:20.921 "is_configured": true, 00:17:20.921 "data_offset": 0, 00:17:20.921 "data_size": 65536 00:17:20.921 }, 00:17:20.921 { 00:17:20.921 "name": "BaseBdev3", 00:17:20.921 "uuid": "df75b5b1-acb5-411d-b48c-319565ca0348", 00:17:20.921 "is_configured": true, 00:17:20.921 "data_offset": 0, 00:17:20.921 "data_size": 65536 00:17:20.921 }, 00:17:20.921 { 00:17:20.921 "name": "BaseBdev4", 00:17:20.921 "uuid": "ca3b221d-6c64-4039-93a0-f9f80213190b", 00:17:20.921 "is_configured": true, 00:17:20.921 "data_offset": 0, 00:17:20.921 "data_size": 65536 00:17:20.921 } 00:17:20.921 ] 00:17:20.921 }' 00:17:20.921 21:13:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:20.921 21:13:43 -- common/autotest_common.sh@10 -- # set +x 00:17:21.856 21:13:44 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:17:21.856 [2024-06-07 21:13:44.469381] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:21.856 [2024-06-07 21:13:44.469425] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:21.856 [2024-06-07 21:13:44.469530] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.856 21:13:44 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:21.856 21:13:44 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:21.856 21:13:44 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:21.856 21:13:44 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:21.856 21:13:44 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:21.856 21:13:44 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:17:21.856 21:13:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:21.856 21:13:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:21.856 21:13:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:21.856 21:13:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:21.856 21:13:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:21.856 21:13:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:21.856 21:13:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:21.856 21:13:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:21.856 21:13:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:21.856 21:13:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.856 21:13:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.114 21:13:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:22.114 "name": "Existed_Raid", 00:17:22.114 "uuid": "5d03d593-0385-468f-b24c-413260739211", 00:17:22.114 "strip_size_kb": 64, 00:17:22.114 "state": "offline", 00:17:22.114 "raid_level": "concat", 00:17:22.114 "superblock": false, 00:17:22.114 "num_base_bdevs": 4, 00:17:22.114 "num_base_bdevs_discovered": 3, 00:17:22.114 "num_base_bdevs_operational": 3, 00:17:22.114 "base_bdevs_list": [ 00:17:22.114 { 00:17:22.114 "name": null, 00:17:22.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.114 "is_configured": false, 00:17:22.114 "data_offset": 0, 00:17:22.114 "data_size": 65536 00:17:22.114 }, 00:17:22.114 { 00:17:22.114 "name": "BaseBdev2", 00:17:22.114 "uuid": "45082ccb-2f01-4e89-96cd-8dda715c5ba7", 00:17:22.114 "is_configured": true, 00:17:22.114 "data_offset": 0, 00:17:22.114 "data_size": 65536 00:17:22.114 }, 00:17:22.114 { 00:17:22.114 "name": "BaseBdev3", 00:17:22.114 "uuid": "df75b5b1-acb5-411d-b48c-319565ca0348", 00:17:22.114 "is_configured": true, 00:17:22.114 "data_offset": 0, 00:17:22.114 "data_size": 65536 00:17:22.114 }, 00:17:22.114 { 00:17:22.114 "name": "BaseBdev4", 00:17:22.114 "uuid": "ca3b221d-6c64-4039-93a0-f9f80213190b", 00:17:22.114 "is_configured": true, 00:17:22.114 "data_offset": 0, 00:17:22.114 "data_size": 65536 00:17:22.114 } 00:17:22.114 ] 00:17:22.114 }' 00:17:22.114 21:13:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:22.114 21:13:44 -- common/autotest_common.sh@10 -- # set +x 00:17:23.102 21:13:45 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:23.102 21:13:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:23.102 21:13:45 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:17:23.102 21:13:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:23.102 21:13:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:23.102 21:13:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:23.102 21:13:45 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:23.360 [2024-06-07 21:13:45.864006] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:23.360 21:13:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:23.360 21:13:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:23.360 21:13:45 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.360 21:13:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:23.617 21:13:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:23.617 21:13:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:23.617 21:13:46 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:23.876 [2024-06-07 21:13:46.333900] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:23.876 21:13:46 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:23.876 21:13:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:23.876 21:13:46 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.876 21:13:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:24.134 21:13:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:24.134 21:13:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:24.134 21:13:46 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:24.134 [2024-06-07 21:13:46.740617] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:24.134 [2024-06-07 21:13:46.740715] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:17:24.134 21:13:46 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:24.134 21:13:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:24.134 21:13:46 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.134 21:13:46 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:24.393 21:13:46 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:24.393 21:13:46 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:24.393 21:13:46 -- bdev/bdev_raid.sh@287 -- # killprocess 133020 00:17:24.393 21:13:46 -- common/autotest_common.sh@926 -- # '[' -z 133020 ']' 00:17:24.393 21:13:46 -- common/autotest_common.sh@930 -- # kill -0 133020 00:17:24.393 21:13:46 -- common/autotest_common.sh@931 -- # uname 00:17:24.393 21:13:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:24.393 21:13:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133020 00:17:24.393 killing process with pid 133020 00:17:24.393 21:13:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:24.393 21:13:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:24.393 21:13:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133020' 00:17:24.393 21:13:46 -- common/autotest_common.sh@945 
-- # kill 133020 00:17:24.393 21:13:46 -- common/autotest_common.sh@950 -- # wait 133020 00:17:24.393 [2024-06-07 21:13:46.991059] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:24.393 [2024-06-07 21:13:46.991147] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:24.651 ************************************ 00:17:24.651 END TEST raid_state_function_test 00:17:24.651 ************************************ 00:17:24.651 21:13:47 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:24.651 00:17:24.651 real 0m13.310s 00:17:24.651 user 0m24.918s 00:17:24.651 sys 0m1.556s 00:17:24.651 21:13:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:24.651 21:13:47 -- common/autotest_common.sh@10 -- # set +x 00:17:24.651 21:13:47 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:17:24.652 21:13:47 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:24.652 21:13:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:24.652 21:13:47 -- common/autotest_common.sh@10 -- # set +x 00:17:24.652 ************************************ 00:17:24.652 START TEST raid_state_function_test_sb 00:17:24.652 ************************************ 00:17:24.652 21:13:47 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@226 -- # 
raid_pid=133463 00:17:24.652 Process raid pid: 133463 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 133463' 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:24.652 21:13:47 -- bdev/bdev_raid.sh@228 -- # waitforlisten 133463 /var/tmp/spdk-raid.sock 00:17:24.652 21:13:47 -- common/autotest_common.sh@819 -- # '[' -z 133463 ']' 00:17:24.652 21:13:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:24.652 21:13:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:24.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:24.652 21:13:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:24.652 21:13:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:24.652 21:13:47 -- common/autotest_common.sh@10 -- # set +x 00:17:24.910 [2024-06-07 21:13:47.345727] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:24.910 [2024-06-07 21:13:47.345923] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.910 [2024-06-07 21:13:47.505623] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.166 [2024-06-07 21:13:47.601661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.166 [2024-06-07 21:13:47.659027] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.731 21:13:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:25.732 21:13:48 -- common/autotest_common.sh@852 -- # return 0 00:17:25.732 21:13:48 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:25.990 [2024-06-07 21:13:48.529893] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:25.990 [2024-06-07 21:13:48.530207] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:25.990 [2024-06-07 21:13:48.530308] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:25.990 [2024-06-07 21:13:48.530369] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:25.990 [2024-06-07 21:13:48.530457] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:25.990 [2024-06-07 21:13:48.530532] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:25.990 [2024-06-07 21:13:48.530564] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:25.990 [2024-06-07 21:13:48.530665] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:25.990 21:13:48 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:25.990 21:13:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:25.990 21:13:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:25.990 21:13:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:25.990 
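Editor's note: the superblock variant of this test passes -s to bdev_raid_create, so the array metadata is written onto the members themselves. At the point of the create call above, none of the four base bdevs exists yet (the "Currently unable to find bdev with name: BaseBdevN" notices), so the raid is registered in the "configuring" state and only comes online once all members are claimed. The create call, as issued in this run:

    # Register a 4-member concat raid with a 64 KiB strip and an on-disk
    # superblock (-s); members named here may appear later, and the raid
    # waits in the "configuring" state until all of them are claimed
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid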
21:13:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:25.990 21:13:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:25.990 21:13:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:25.990 21:13:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:25.990 21:13:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:25.990 21:13:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:25.990 21:13:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.990 21:13:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.248 21:13:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:26.248 "name": "Existed_Raid", 00:17:26.248 "uuid": "01487113-bf88-4b28-90b1-94a49553ca4c", 00:17:26.248 "strip_size_kb": 64, 00:17:26.248 "state": "configuring", 00:17:26.248 "raid_level": "concat", 00:17:26.248 "superblock": true, 00:17:26.248 "num_base_bdevs": 4, 00:17:26.248 "num_base_bdevs_discovered": 0, 00:17:26.248 "num_base_bdevs_operational": 4, 00:17:26.248 "base_bdevs_list": [ 00:17:26.248 { 00:17:26.248 "name": "BaseBdev1", 00:17:26.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.248 "is_configured": false, 00:17:26.248 "data_offset": 0, 00:17:26.248 "data_size": 0 00:17:26.248 }, 00:17:26.248 { 00:17:26.248 "name": "BaseBdev2", 00:17:26.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.248 "is_configured": false, 00:17:26.248 "data_offset": 0, 00:17:26.248 "data_size": 0 00:17:26.248 }, 00:17:26.248 { 00:17:26.248 "name": "BaseBdev3", 00:17:26.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.249 "is_configured": false, 00:17:26.249 "data_offset": 0, 00:17:26.249 "data_size": 0 00:17:26.249 }, 00:17:26.249 { 00:17:26.249 "name": "BaseBdev4", 00:17:26.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.249 "is_configured": false, 00:17:26.249 "data_offset": 0, 00:17:26.249 "data_size": 0 00:17:26.249 } 00:17:26.249 ] 00:17:26.249 }' 00:17:26.249 21:13:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:26.249 21:13:48 -- common/autotest_common.sh@10 -- # set +x 00:17:26.815 21:13:49 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:27.073 [2024-06-07 21:13:49.673945] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:27.073 [2024-06-07 21:13:49.674212] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:27.073 21:13:49 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:27.331 [2024-06-07 21:13:49.886035] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:27.331 [2024-06-07 21:13:49.886279] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:27.331 [2024-06-07 21:13:49.886405] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:27.331 [2024-06-07 21:13:49.886475] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:27.331 [2024-06-07 21:13:49.886596] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:27.331 [2024-06-07 21:13:49.886673] 
bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:27.331 [2024-06-07 21:13:49.886797] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:27.331 [2024-06-07 21:13:49.886939] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:27.331 21:13:49 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:27.589 [2024-06-07 21:13:50.113070] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.589 BaseBdev1 00:17:27.589 21:13:50 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:27.589 21:13:50 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:27.589 21:13:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:27.589 21:13:50 -- common/autotest_common.sh@889 -- # local i 00:17:27.589 21:13:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:27.589 21:13:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:27.589 21:13:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:27.847 21:13:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:28.105 [ 00:17:28.105 { 00:17:28.105 "name": "BaseBdev1", 00:17:28.105 "aliases": [ 00:17:28.105 "1fe9abe3-b6d4-41b9-8f1e-695bc31cd4ec" 00:17:28.105 ], 00:17:28.105 "product_name": "Malloc disk", 00:17:28.105 "block_size": 512, 00:17:28.105 "num_blocks": 65536, 00:17:28.105 "uuid": "1fe9abe3-b6d4-41b9-8f1e-695bc31cd4ec", 00:17:28.105 "assigned_rate_limits": { 00:17:28.105 "rw_ios_per_sec": 0, 00:17:28.105 "rw_mbytes_per_sec": 0, 00:17:28.105 "r_mbytes_per_sec": 0, 00:17:28.105 "w_mbytes_per_sec": 0 00:17:28.105 }, 00:17:28.105 "claimed": true, 00:17:28.105 "claim_type": "exclusive_write", 00:17:28.105 "zoned": false, 00:17:28.105 "supported_io_types": { 00:17:28.105 "read": true, 00:17:28.105 "write": true, 00:17:28.105 "unmap": true, 00:17:28.105 "write_zeroes": true, 00:17:28.105 "flush": true, 00:17:28.105 "reset": true, 00:17:28.105 "compare": false, 00:17:28.105 "compare_and_write": false, 00:17:28.105 "abort": true, 00:17:28.105 "nvme_admin": false, 00:17:28.105 "nvme_io": false 00:17:28.105 }, 00:17:28.105 "memory_domains": [ 00:17:28.105 { 00:17:28.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.105 "dma_device_type": 2 00:17:28.105 } 00:17:28.105 ], 00:17:28.105 "driver_specific": {} 00:17:28.105 } 00:17:28.105 ] 00:17:28.105 21:13:50 -- common/autotest_common.sh@895 -- # return 0 00:17:28.105 21:13:50 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:28.105 21:13:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:28.105 21:13:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:28.105 21:13:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:28.105 21:13:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:28.105 21:13:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:28.105 21:13:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:28.105 21:13:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:28.105 21:13:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:28.105 21:13:50 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:17:28.105 21:13:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.105 21:13:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.105 21:13:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:28.105 "name": "Existed_Raid", 00:17:28.105 "uuid": "df215b09-7119-4300-b143-fa00357bc52f", 00:17:28.105 "strip_size_kb": 64, 00:17:28.105 "state": "configuring", 00:17:28.105 "raid_level": "concat", 00:17:28.105 "superblock": true, 00:17:28.105 "num_base_bdevs": 4, 00:17:28.105 "num_base_bdevs_discovered": 1, 00:17:28.105 "num_base_bdevs_operational": 4, 00:17:28.105 "base_bdevs_list": [ 00:17:28.105 { 00:17:28.105 "name": "BaseBdev1", 00:17:28.105 "uuid": "1fe9abe3-b6d4-41b9-8f1e-695bc31cd4ec", 00:17:28.106 "is_configured": true, 00:17:28.106 "data_offset": 2048, 00:17:28.106 "data_size": 63488 00:17:28.106 }, 00:17:28.106 { 00:17:28.106 "name": "BaseBdev2", 00:17:28.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.106 "is_configured": false, 00:17:28.106 "data_offset": 0, 00:17:28.106 "data_size": 0 00:17:28.106 }, 00:17:28.106 { 00:17:28.106 "name": "BaseBdev3", 00:17:28.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.106 "is_configured": false, 00:17:28.106 "data_offset": 0, 00:17:28.106 "data_size": 0 00:17:28.106 }, 00:17:28.106 { 00:17:28.106 "name": "BaseBdev4", 00:17:28.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.106 "is_configured": false, 00:17:28.106 "data_offset": 0, 00:17:28.106 "data_size": 0 00:17:28.106 } 00:17:28.106 ] 00:17:28.106 }' 00:17:28.106 21:13:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:28.106 21:13:50 -- common/autotest_common.sh@10 -- # set +x 00:17:29.042 21:13:51 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:29.042 [2024-06-07 21:13:51.601544] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:29.042 [2024-06-07 21:13:51.601797] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:29.042 21:13:51 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:29.042 21:13:51 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:29.299 21:13:51 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:29.558 BaseBdev1 00:17:29.558 21:13:52 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:29.558 21:13:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:29.558 21:13:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:29.558 21:13:52 -- common/autotest_common.sh@889 -- # local i 00:17:29.558 21:13:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:29.558 21:13:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:29.558 21:13:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:29.817 21:13:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:30.076 [ 00:17:30.076 { 00:17:30.076 "name": "BaseBdev1", 00:17:30.076 "aliases": [ 00:17:30.076 
"406ed647-c9ed-479d-9110-ba05851fba4f" 00:17:30.076 ], 00:17:30.076 "product_name": "Malloc disk", 00:17:30.076 "block_size": 512, 00:17:30.076 "num_blocks": 65536, 00:17:30.076 "uuid": "406ed647-c9ed-479d-9110-ba05851fba4f", 00:17:30.076 "assigned_rate_limits": { 00:17:30.076 "rw_ios_per_sec": 0, 00:17:30.076 "rw_mbytes_per_sec": 0, 00:17:30.076 "r_mbytes_per_sec": 0, 00:17:30.076 "w_mbytes_per_sec": 0 00:17:30.076 }, 00:17:30.076 "claimed": false, 00:17:30.076 "zoned": false, 00:17:30.076 "supported_io_types": { 00:17:30.076 "read": true, 00:17:30.076 "write": true, 00:17:30.076 "unmap": true, 00:17:30.076 "write_zeroes": true, 00:17:30.076 "flush": true, 00:17:30.076 "reset": true, 00:17:30.076 "compare": false, 00:17:30.076 "compare_and_write": false, 00:17:30.076 "abort": true, 00:17:30.076 "nvme_admin": false, 00:17:30.076 "nvme_io": false 00:17:30.076 }, 00:17:30.076 "memory_domains": [ 00:17:30.076 { 00:17:30.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.076 "dma_device_type": 2 00:17:30.076 } 00:17:30.076 ], 00:17:30.076 "driver_specific": {} 00:17:30.076 } 00:17:30.076 ] 00:17:30.076 21:13:52 -- common/autotest_common.sh@895 -- # return 0 00:17:30.076 21:13:52 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:30.077 [2024-06-07 21:13:52.701772] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.077 [2024-06-07 21:13:52.703872] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:30.077 [2024-06-07 21:13:52.704089] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:30.077 [2024-06-07 21:13:52.704227] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:30.077 [2024-06-07 21:13:52.704357] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:30.077 [2024-06-07 21:13:52.704454] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:30.077 [2024-06-07 21:13:52.704508] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:30.077 21:13:52 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:30.077 21:13:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:30.077 21:13:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:30.077 21:13:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:30.077 21:13:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:30.077 21:13:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:30.077 21:13:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:30.077 21:13:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:30.077 21:13:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:30.077 21:13:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:30.077 21:13:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:30.077 21:13:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:30.077 21:13:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.077 21:13:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.336 21:13:52 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:17:30.336 "name": "Existed_Raid", 00:17:30.336 "uuid": "07715ee0-cefa-4614-8084-0bd78e914f16", 00:17:30.336 "strip_size_kb": 64, 00:17:30.336 "state": "configuring", 00:17:30.336 "raid_level": "concat", 00:17:30.336 "superblock": true, 00:17:30.336 "num_base_bdevs": 4, 00:17:30.336 "num_base_bdevs_discovered": 1, 00:17:30.336 "num_base_bdevs_operational": 4, 00:17:30.336 "base_bdevs_list": [ 00:17:30.336 { 00:17:30.336 "name": "BaseBdev1", 00:17:30.336 "uuid": "406ed647-c9ed-479d-9110-ba05851fba4f", 00:17:30.336 "is_configured": true, 00:17:30.336 "data_offset": 2048, 00:17:30.336 "data_size": 63488 00:17:30.336 }, 00:17:30.336 { 00:17:30.336 "name": "BaseBdev2", 00:17:30.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.336 "is_configured": false, 00:17:30.336 "data_offset": 0, 00:17:30.336 "data_size": 0 00:17:30.336 }, 00:17:30.336 { 00:17:30.336 "name": "BaseBdev3", 00:17:30.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.336 "is_configured": false, 00:17:30.336 "data_offset": 0, 00:17:30.336 "data_size": 0 00:17:30.336 }, 00:17:30.336 { 00:17:30.336 "name": "BaseBdev4", 00:17:30.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.336 "is_configured": false, 00:17:30.336 "data_offset": 0, 00:17:30.336 "data_size": 0 00:17:30.336 } 00:17:30.336 ] 00:17:30.336 }' 00:17:30.336 21:13:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:30.336 21:13:52 -- common/autotest_common.sh@10 -- # set +x 00:17:30.905 21:13:53 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:31.164 [2024-06-07 21:13:53.812825] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:31.164 BaseBdev2 00:17:31.164 21:13:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:31.164 21:13:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:31.164 21:13:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:31.164 21:13:53 -- common/autotest_common.sh@889 -- # local i 00:17:31.164 21:13:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:31.164 21:13:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:31.164 21:13:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:31.422 21:13:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:31.683 [ 00:17:31.683 { 00:17:31.683 "name": "BaseBdev2", 00:17:31.683 "aliases": [ 00:17:31.683 "95b144e0-d8c5-4511-b004-b2bf6d9d04a3" 00:17:31.683 ], 00:17:31.683 "product_name": "Malloc disk", 00:17:31.683 "block_size": 512, 00:17:31.683 "num_blocks": 65536, 00:17:31.683 "uuid": "95b144e0-d8c5-4511-b004-b2bf6d9d04a3", 00:17:31.683 "assigned_rate_limits": { 00:17:31.683 "rw_ios_per_sec": 0, 00:17:31.683 "rw_mbytes_per_sec": 0, 00:17:31.683 "r_mbytes_per_sec": 0, 00:17:31.683 "w_mbytes_per_sec": 0 00:17:31.683 }, 00:17:31.683 "claimed": true, 00:17:31.683 "claim_type": "exclusive_write", 00:17:31.683 "zoned": false, 00:17:31.683 "supported_io_types": { 00:17:31.683 "read": true, 00:17:31.683 "write": true, 00:17:31.683 "unmap": true, 00:17:31.683 "write_zeroes": true, 00:17:31.683 "flush": true, 00:17:31.683 "reset": true, 00:17:31.683 "compare": false, 00:17:31.683 "compare_and_write": false, 00:17:31.683 "abort": true, 00:17:31.683 "nvme_admin": false, 00:17:31.683 
"nvme_io": false 00:17:31.683 }, 00:17:31.683 "memory_domains": [ 00:17:31.683 { 00:17:31.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.683 "dma_device_type": 2 00:17:31.683 } 00:17:31.683 ], 00:17:31.683 "driver_specific": {} 00:17:31.683 } 00:17:31.683 ] 00:17:31.683 21:13:54 -- common/autotest_common.sh@895 -- # return 0 00:17:31.683 21:13:54 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:31.683 21:13:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:31.683 21:13:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:31.683 21:13:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:31.683 21:13:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:31.683 21:13:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:31.683 21:13:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:31.683 21:13:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:31.683 21:13:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:31.683 21:13:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:31.683 21:13:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:31.683 21:13:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:31.683 21:13:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.683 21:13:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.942 21:13:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:31.942 "name": "Existed_Raid", 00:17:31.942 "uuid": "07715ee0-cefa-4614-8084-0bd78e914f16", 00:17:31.942 "strip_size_kb": 64, 00:17:31.942 "state": "configuring", 00:17:31.942 "raid_level": "concat", 00:17:31.942 "superblock": true, 00:17:31.942 "num_base_bdevs": 4, 00:17:31.942 "num_base_bdevs_discovered": 2, 00:17:31.942 "num_base_bdevs_operational": 4, 00:17:31.942 "base_bdevs_list": [ 00:17:31.942 { 00:17:31.942 "name": "BaseBdev1", 00:17:31.942 "uuid": "406ed647-c9ed-479d-9110-ba05851fba4f", 00:17:31.942 "is_configured": true, 00:17:31.942 "data_offset": 2048, 00:17:31.942 "data_size": 63488 00:17:31.942 }, 00:17:31.942 { 00:17:31.942 "name": "BaseBdev2", 00:17:31.942 "uuid": "95b144e0-d8c5-4511-b004-b2bf6d9d04a3", 00:17:31.942 "is_configured": true, 00:17:31.942 "data_offset": 2048, 00:17:31.942 "data_size": 63488 00:17:31.942 }, 00:17:31.942 { 00:17:31.942 "name": "BaseBdev3", 00:17:31.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.942 "is_configured": false, 00:17:31.942 "data_offset": 0, 00:17:31.942 "data_size": 0 00:17:31.942 }, 00:17:31.942 { 00:17:31.942 "name": "BaseBdev4", 00:17:31.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.942 "is_configured": false, 00:17:31.942 "data_offset": 0, 00:17:31.942 "data_size": 0 00:17:31.942 } 00:17:31.942 ] 00:17:31.942 }' 00:17:31.942 21:13:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:31.942 21:13:54 -- common/autotest_common.sh@10 -- # set +x 00:17:32.509 21:13:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:32.768 [2024-06-07 21:13:55.414236] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:32.768 BaseBdev3 00:17:32.768 21:13:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:32.768 21:13:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:32.768 21:13:55 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:32.768 21:13:55 -- common/autotest_common.sh@889 -- # local i 00:17:32.768 21:13:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:32.768 21:13:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:32.768 21:13:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:33.027 21:13:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:33.286 [ 00:17:33.286 { 00:17:33.286 "name": "BaseBdev3", 00:17:33.286 "aliases": [ 00:17:33.286 "532f32b6-09ac-4c1c-8950-abeb7be44609" 00:17:33.286 ], 00:17:33.286 "product_name": "Malloc disk", 00:17:33.286 "block_size": 512, 00:17:33.286 "num_blocks": 65536, 00:17:33.286 "uuid": "532f32b6-09ac-4c1c-8950-abeb7be44609", 00:17:33.286 "assigned_rate_limits": { 00:17:33.286 "rw_ios_per_sec": 0, 00:17:33.286 "rw_mbytes_per_sec": 0, 00:17:33.286 "r_mbytes_per_sec": 0, 00:17:33.286 "w_mbytes_per_sec": 0 00:17:33.286 }, 00:17:33.286 "claimed": true, 00:17:33.286 "claim_type": "exclusive_write", 00:17:33.286 "zoned": false, 00:17:33.286 "supported_io_types": { 00:17:33.286 "read": true, 00:17:33.286 "write": true, 00:17:33.286 "unmap": true, 00:17:33.286 "write_zeroes": true, 00:17:33.286 "flush": true, 00:17:33.286 "reset": true, 00:17:33.286 "compare": false, 00:17:33.286 "compare_and_write": false, 00:17:33.286 "abort": true, 00:17:33.286 "nvme_admin": false, 00:17:33.286 "nvme_io": false 00:17:33.286 }, 00:17:33.286 "memory_domains": [ 00:17:33.286 { 00:17:33.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.286 "dma_device_type": 2 00:17:33.286 } 00:17:33.286 ], 00:17:33.286 "driver_specific": {} 00:17:33.286 } 00:17:33.286 ] 00:17:33.286 21:13:55 -- common/autotest_common.sh@895 -- # return 0 00:17:33.286 21:13:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:33.286 21:13:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:33.286 21:13:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:33.286 21:13:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:33.286 21:13:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:33.286 21:13:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:33.286 21:13:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:33.286 21:13:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:33.286 21:13:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:33.286 21:13:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:33.286 21:13:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:33.286 21:13:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:33.286 21:13:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.286 21:13:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.545 21:13:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:33.545 "name": "Existed_Raid", 00:17:33.545 "uuid": "07715ee0-cefa-4614-8084-0bd78e914f16", 00:17:33.545 "strip_size_kb": 64, 00:17:33.545 "state": "configuring", 00:17:33.545 "raid_level": "concat", 00:17:33.545 "superblock": true, 00:17:33.545 "num_base_bdevs": 4, 00:17:33.545 "num_base_bdevs_discovered": 3, 00:17:33.545 "num_base_bdevs_operational": 4, 
00:17:33.545 "base_bdevs_list": [ 00:17:33.545 { 00:17:33.545 "name": "BaseBdev1", 00:17:33.545 "uuid": "406ed647-c9ed-479d-9110-ba05851fba4f", 00:17:33.545 "is_configured": true, 00:17:33.545 "data_offset": 2048, 00:17:33.545 "data_size": 63488 00:17:33.545 }, 00:17:33.545 { 00:17:33.545 "name": "BaseBdev2", 00:17:33.545 "uuid": "95b144e0-d8c5-4511-b004-b2bf6d9d04a3", 00:17:33.545 "is_configured": true, 00:17:33.545 "data_offset": 2048, 00:17:33.545 "data_size": 63488 00:17:33.545 }, 00:17:33.545 { 00:17:33.545 "name": "BaseBdev3", 00:17:33.545 "uuid": "532f32b6-09ac-4c1c-8950-abeb7be44609", 00:17:33.545 "is_configured": true, 00:17:33.545 "data_offset": 2048, 00:17:33.545 "data_size": 63488 00:17:33.545 }, 00:17:33.545 { 00:17:33.545 "name": "BaseBdev4", 00:17:33.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.545 "is_configured": false, 00:17:33.545 "data_offset": 0, 00:17:33.545 "data_size": 0 00:17:33.545 } 00:17:33.545 ] 00:17:33.545 }' 00:17:33.545 21:13:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:33.545 21:13:56 -- common/autotest_common.sh@10 -- # set +x 00:17:34.112 21:13:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:34.371 [2024-06-07 21:13:57.019626] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:34.371 [2024-06-07 21:13:57.019898] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:17:34.371 [2024-06-07 21:13:57.019914] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:34.371 [2024-06-07 21:13:57.020110] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:34.371 BaseBdev4 00:17:34.371 [2024-06-07 21:13:57.020596] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:17:34.371 [2024-06-07 21:13:57.020622] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:17:34.371 [2024-06-07 21:13:57.020791] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.371 21:13:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:34.371 21:13:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:17:34.371 21:13:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:34.371 21:13:57 -- common/autotest_common.sh@889 -- # local i 00:17:34.371 21:13:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:34.371 21:13:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:34.371 21:13:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:34.629 21:13:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:34.889 [ 00:17:34.889 { 00:17:34.889 "name": "BaseBdev4", 00:17:34.889 "aliases": [ 00:17:34.889 "6bda9dea-b7c7-4d9f-b6c7-60919489d36a" 00:17:34.889 ], 00:17:34.889 "product_name": "Malloc disk", 00:17:34.889 "block_size": 512, 00:17:34.889 "num_blocks": 65536, 00:17:34.889 "uuid": "6bda9dea-b7c7-4d9f-b6c7-60919489d36a", 00:17:34.889 "assigned_rate_limits": { 00:17:34.889 "rw_ios_per_sec": 0, 00:17:34.889 "rw_mbytes_per_sec": 0, 00:17:34.889 "r_mbytes_per_sec": 0, 00:17:34.889 "w_mbytes_per_sec": 0 00:17:34.889 }, 00:17:34.889 "claimed": true, 00:17:34.889 "claim_type": 
"exclusive_write", 00:17:34.889 "zoned": false, 00:17:34.889 "supported_io_types": { 00:17:34.889 "read": true, 00:17:34.889 "write": true, 00:17:34.889 "unmap": true, 00:17:34.889 "write_zeroes": true, 00:17:34.889 "flush": true, 00:17:34.889 "reset": true, 00:17:34.889 "compare": false, 00:17:34.889 "compare_and_write": false, 00:17:34.889 "abort": true, 00:17:34.889 "nvme_admin": false, 00:17:34.889 "nvme_io": false 00:17:34.889 }, 00:17:34.889 "memory_domains": [ 00:17:34.889 { 00:17:34.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.889 "dma_device_type": 2 00:17:34.889 } 00:17:34.889 ], 00:17:34.889 "driver_specific": {} 00:17:34.889 } 00:17:34.889 ] 00:17:34.889 21:13:57 -- common/autotest_common.sh@895 -- # return 0 00:17:34.889 21:13:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:34.889 21:13:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:34.889 21:13:57 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:34.889 21:13:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:34.889 21:13:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:34.889 21:13:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:34.889 21:13:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:34.889 21:13:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:34.889 21:13:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:34.889 21:13:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:34.889 21:13:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:34.889 21:13:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:34.889 21:13:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.889 21:13:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.147 21:13:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:35.147 "name": "Existed_Raid", 00:17:35.147 "uuid": "07715ee0-cefa-4614-8084-0bd78e914f16", 00:17:35.147 "strip_size_kb": 64, 00:17:35.147 "state": "online", 00:17:35.147 "raid_level": "concat", 00:17:35.147 "superblock": true, 00:17:35.147 "num_base_bdevs": 4, 00:17:35.147 "num_base_bdevs_discovered": 4, 00:17:35.147 "num_base_bdevs_operational": 4, 00:17:35.147 "base_bdevs_list": [ 00:17:35.147 { 00:17:35.147 "name": "BaseBdev1", 00:17:35.147 "uuid": "406ed647-c9ed-479d-9110-ba05851fba4f", 00:17:35.147 "is_configured": true, 00:17:35.147 "data_offset": 2048, 00:17:35.147 "data_size": 63488 00:17:35.147 }, 00:17:35.147 { 00:17:35.147 "name": "BaseBdev2", 00:17:35.147 "uuid": "95b144e0-d8c5-4511-b004-b2bf6d9d04a3", 00:17:35.147 "is_configured": true, 00:17:35.147 "data_offset": 2048, 00:17:35.147 "data_size": 63488 00:17:35.147 }, 00:17:35.147 { 00:17:35.147 "name": "BaseBdev3", 00:17:35.147 "uuid": "532f32b6-09ac-4c1c-8950-abeb7be44609", 00:17:35.147 "is_configured": true, 00:17:35.147 "data_offset": 2048, 00:17:35.147 "data_size": 63488 00:17:35.147 }, 00:17:35.147 { 00:17:35.147 "name": "BaseBdev4", 00:17:35.147 "uuid": "6bda9dea-b7c7-4d9f-b6c7-60919489d36a", 00:17:35.147 "is_configured": true, 00:17:35.147 "data_offset": 2048, 00:17:35.147 "data_size": 63488 00:17:35.147 } 00:17:35.147 ] 00:17:35.147 }' 00:17:35.147 21:13:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:35.147 21:13:57 -- common/autotest_common.sh@10 -- # set +x 00:17:36.083 21:13:58 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:36.083 [2024-06-07 21:13:58.736207] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:36.083 [2024-06-07 21:13:58.736250] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:36.083 [2024-06-07 21:13:58.736350] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.342 21:13:58 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:36.342 21:13:58 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:36.342 21:13:58 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:36.342 21:13:58 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:36.342 21:13:58 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:36.342 21:13:58 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:17:36.343 21:13:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:36.343 21:13:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:36.343 21:13:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:36.343 21:13:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:36.343 21:13:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:36.343 21:13:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:36.343 21:13:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:36.343 21:13:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:36.343 21:13:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:36.343 21:13:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.343 21:13:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.602 21:13:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:36.602 "name": "Existed_Raid", 00:17:36.602 "uuid": "07715ee0-cefa-4614-8084-0bd78e914f16", 00:17:36.602 "strip_size_kb": 64, 00:17:36.602 "state": "offline", 00:17:36.602 "raid_level": "concat", 00:17:36.602 "superblock": true, 00:17:36.602 "num_base_bdevs": 4, 00:17:36.602 "num_base_bdevs_discovered": 3, 00:17:36.602 "num_base_bdevs_operational": 3, 00:17:36.602 "base_bdevs_list": [ 00:17:36.602 { 00:17:36.602 "name": null, 00:17:36.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.602 "is_configured": false, 00:17:36.602 "data_offset": 2048, 00:17:36.602 "data_size": 63488 00:17:36.602 }, 00:17:36.602 { 00:17:36.602 "name": "BaseBdev2", 00:17:36.602 "uuid": "95b144e0-d8c5-4511-b004-b2bf6d9d04a3", 00:17:36.602 "is_configured": true, 00:17:36.602 "data_offset": 2048, 00:17:36.602 "data_size": 63488 00:17:36.602 }, 00:17:36.602 { 00:17:36.602 "name": "BaseBdev3", 00:17:36.602 "uuid": "532f32b6-09ac-4c1c-8950-abeb7be44609", 00:17:36.602 "is_configured": true, 00:17:36.602 "data_offset": 2048, 00:17:36.602 "data_size": 63488 00:17:36.602 }, 00:17:36.602 { 00:17:36.602 "name": "BaseBdev4", 00:17:36.602 "uuid": "6bda9dea-b7c7-4d9f-b6c7-60919489d36a", 00:17:36.602 "is_configured": true, 00:17:36.602 "data_offset": 2048, 00:17:36.602 "data_size": 63488 00:17:36.602 } 00:17:36.602 ] 00:17:36.602 }' 00:17:36.602 21:13:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:36.602 21:13:59 -- common/autotest_common.sh@10 -- # set +x 00:17:37.169 21:13:59 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:37.169 21:13:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:37.169 21:13:59 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.169 21:13:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:37.427 21:13:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:37.427 21:13:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:37.427 21:13:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:37.686 [2024-06-07 21:14:00.210272] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:37.686 21:14:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:37.686 21:14:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:37.686 21:14:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.686 21:14:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:37.945 21:14:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:37.945 21:14:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:37.945 21:14:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:38.203 [2024-06-07 21:14:00.629771] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:38.203 21:14:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:38.203 21:14:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:38.203 21:14:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:38.203 21:14:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.203 21:14:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:38.203 21:14:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:38.203 21:14:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:38.462 [2024-06-07 21:14:01.051877] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:38.462 [2024-06-07 21:14:01.051977] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:17:38.462 21:14:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:38.462 21:14:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:38.462 21:14:01 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.462 21:14:01 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:38.721 21:14:01 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:38.721 21:14:01 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:38.721 21:14:01 -- bdev/bdev_raid.sh@287 -- # killprocess 133463 00:17:38.721 21:14:01 -- common/autotest_common.sh@926 -- # '[' -z 133463 ']' 00:17:38.721 21:14:01 -- common/autotest_common.sh@930 -- # kill -0 133463 00:17:38.721 21:14:01 -- common/autotest_common.sh@931 -- # uname 00:17:38.721 21:14:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:38.721 21:14:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133463 00:17:38.721 killing process with pid 133463 00:17:38.721 21:14:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:38.721 21:14:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:38.721 21:14:01 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 133463' 00:17:38.721 21:14:01 -- common/autotest_common.sh@945 -- # kill 133463 00:17:38.721 21:14:01 -- common/autotest_common.sh@950 -- # wait 133463 00:17:38.721 [2024-06-07 21:14:01.306806] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:38.721 [2024-06-07 21:14:01.306931] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:38.979 ************************************ 00:17:38.979 END TEST raid_state_function_test_sb 00:17:38.979 ************************************ 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:38.979 00:17:38.979 real 0m14.247s 00:17:38.979 user 0m26.673s 00:17:38.979 sys 0m1.633s 00:17:38.979 21:14:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:38.979 21:14:01 -- common/autotest_common.sh@10 -- # set +x 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:17:38.979 21:14:01 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:38.979 21:14:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:38.979 21:14:01 -- common/autotest_common.sh@10 -- # set +x 00:17:38.979 ************************************ 00:17:38.979 START TEST raid_superblock_test 00:17:38.979 ************************************ 00:17:38.979 21:14:01 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@357 -- # raid_pid=133931 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@358 -- # waitforlisten 133931 /var/tmp/spdk-raid.sock 00:17:38.979 21:14:01 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:38.979 21:14:01 -- common/autotest_common.sh@819 -- # '[' -z 133931 ']' 00:17:38.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:38.979 21:14:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:38.979 21:14:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:38.979 21:14:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
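Editor's note: teardown and setup both go through the harness helpers traced here: killprocess confirms the pid still names a reactor_0 process before signalling it, and waitforlisten then polls the fresh bdev_svc instance until the RPC socket accepts commands. A condensed sketch of the kill-and-reap step, assuming (as in the harness) the pid is a child of the test shell so wait can reap it:

    # Condensed from autotest_common.sh killprocess: only signal the pid if it
    # is still our SPDK reactor, then reap it (this run killed pid 133463)
    pid=133463
    [ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ] && kill "$pid" && wait "$pid"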
00:17:38.980 21:14:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:38.980 21:14:01 -- common/autotest_common.sh@10 -- # set +x 00:17:38.980 [2024-06-07 21:14:01.640938] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:38.980 [2024-06-07 21:14:01.641774] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133931 ] 00:17:39.238 [2024-06-07 21:14:01.796069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.238 [2024-06-07 21:14:01.882930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.496 [2024-06-07 21:14:01.936917] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:40.063 21:14:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:40.063 21:14:02 -- common/autotest_common.sh@852 -- # return 0 00:17:40.063 21:14:02 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:40.063 21:14:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:40.063 21:14:02 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:40.063 21:14:02 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:40.063 21:14:02 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:40.063 21:14:02 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:40.063 21:14:02 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:40.063 21:14:02 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:40.063 21:14:02 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:40.321 malloc1 00:17:40.322 21:14:02 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:40.584 [2024-06-07 21:14:02.999686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:40.584 [2024-06-07 21:14:02.999863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.584 [2024-06-07 21:14:02.999910] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:40.584 [2024-06-07 21:14:02.999962] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.584 [2024-06-07 21:14:03.002577] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.584 [2024-06-07 21:14:03.002642] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:40.584 pt1 00:17:40.584 21:14:03 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:40.584 21:14:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:40.584 21:14:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:40.584 21:14:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:40.584 21:14:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:40.584 21:14:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:40.584 21:14:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:40.584 21:14:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:40.584 21:14:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:40.584 malloc2 00:17:40.584 21:14:03 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:40.843 [2024-06-07 21:14:03.422738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:40.843 [2024-06-07 21:14:03.422843] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.843 [2024-06-07 21:14:03.422884] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:40.843 [2024-06-07 21:14:03.422984] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.843 [2024-06-07 21:14:03.425133] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.843 [2024-06-07 21:14:03.425198] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:40.843 pt2 00:17:40.843 21:14:03 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:40.843 21:14:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:40.843 21:14:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:40.843 21:14:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:40.843 21:14:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:40.843 21:14:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:40.843 21:14:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:40.843 21:14:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:40.843 21:14:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:41.101 malloc3 00:17:41.101 21:14:03 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:41.360 [2024-06-07 21:14:03.851804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:41.360 [2024-06-07 21:14:03.851923] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.360 [2024-06-07 21:14:03.851966] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:41.360 [2024-06-07 21:14:03.852057] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.360 [2024-06-07 21:14:03.854384] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.360 [2024-06-07 21:14:03.854451] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:41.360 pt3 00:17:41.360 21:14:03 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:41.360 21:14:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:41.360 21:14:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:17:41.360 21:14:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:17:41.360 21:14:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:41.360 21:14:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:41.360 21:14:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:41.360 21:14:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:41.360 21:14:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:41.619 malloc4 00:17:41.619 21:14:04 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:41.619 [2024-06-07 21:14:04.262809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:41.619 [2024-06-07 21:14:04.262925] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.619 [2024-06-07 21:14:04.262966] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:41.619 [2024-06-07 21:14:04.263057] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.619 [2024-06-07 21:14:04.265292] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.619 [2024-06-07 21:14:04.265358] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:41.619 pt4 00:17:41.619 21:14:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:41.619 21:14:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:41.619 21:14:04 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:41.877 [2024-06-07 21:14:04.454964] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:41.877 [2024-06-07 21:14:04.456960] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:41.877 [2024-06-07 21:14:04.457057] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:41.877 [2024-06-07 21:14:04.457183] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:41.877 [2024-06-07 21:14:04.457427] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:17:41.877 [2024-06-07 21:14:04.457451] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:41.877 [2024-06-07 21:14:04.457584] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:41.877 [2024-06-07 21:14:04.458036] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:17:41.877 [2024-06-07 21:14:04.458057] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:17:41.877 [2024-06-07 21:14:04.458237] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.877 21:14:04 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:41.877 21:14:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:41.877 21:14:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:41.877 21:14:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:41.877 21:14:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:41.877 21:14:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:41.877 21:14:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:41.877 21:14:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:41.877 21:14:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:41.877 21:14:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:41.877 21:14:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
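The array assembled in the loop above, condensed into its raw RPC sequence (a sketch; rpc_py is the rpc.py/socket wrapper from the bring-up sketch earlier). The -s flag asks bdev_raid_create to write a superblock onto each base bdev, which is why each 65536-block malloc bdev contributes only 63488 data blocks at data_offset 2048, and why the configure record reports blockcnt 253952 = 4 x 63488:

  for i in 1 2 3 4; do
      $rpc_py bdev_malloc_create 32 512 -b "malloc$i"
      $rpc_py bdev_passthru_create -b "malloc$i" -p "pt$i" \
          -u "00000000-0000-0000-0000-00000000000$i"
  done
  # concat requires a strip size (-z 64); the raid flips to "online"
  # as soon as the fourth base bdev (pt4) is claimed
  $rpc_py bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s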
00:17:41.877 21:14:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.136 21:14:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:42.136 "name": "raid_bdev1", 00:17:42.136 "uuid": "151222c2-56ae-4246-bbb8-df7927d33d39", 00:17:42.136 "strip_size_kb": 64, 00:17:42.136 "state": "online", 00:17:42.136 "raid_level": "concat", 00:17:42.136 "superblock": true, 00:17:42.136 "num_base_bdevs": 4, 00:17:42.136 "num_base_bdevs_discovered": 4, 00:17:42.136 "num_base_bdevs_operational": 4, 00:17:42.136 "base_bdevs_list": [ 00:17:42.136 { 00:17:42.136 "name": "pt1", 00:17:42.136 "uuid": "2eb4e8d4-d6ba-57d1-a85c-ab3ad79a95a9", 00:17:42.136 "is_configured": true, 00:17:42.136 "data_offset": 2048, 00:17:42.136 "data_size": 63488 00:17:42.136 }, 00:17:42.136 { 00:17:42.136 "name": "pt2", 00:17:42.136 "uuid": "1ba1275b-4d0f-5015-b6bc-646ec5c4e382", 00:17:42.136 "is_configured": true, 00:17:42.136 "data_offset": 2048, 00:17:42.136 "data_size": 63488 00:17:42.136 }, 00:17:42.136 { 00:17:42.136 "name": "pt3", 00:17:42.136 "uuid": "d8ce8b50-8457-550f-9443-37708e7a12b8", 00:17:42.136 "is_configured": true, 00:17:42.136 "data_offset": 2048, 00:17:42.136 "data_size": 63488 00:17:42.136 }, 00:17:42.136 { 00:17:42.136 "name": "pt4", 00:17:42.136 "uuid": "eec7a522-d443-5de8-8d5b-368d5a35057c", 00:17:42.136 "is_configured": true, 00:17:42.136 "data_offset": 2048, 00:17:42.136 "data_size": 63488 00:17:42.136 } 00:17:42.136 ] 00:17:42.136 }' 00:17:42.136 21:14:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:42.136 21:14:04 -- common/autotest_common.sh@10 -- # set +x 00:17:43.071 21:14:05 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:43.071 21:14:05 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:43.071 [2024-06-07 21:14:05.635558] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.071 21:14:05 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=151222c2-56ae-4246-bbb8-df7927d33d39 00:17:43.071 21:14:05 -- bdev/bdev_raid.sh@380 -- # '[' -z 151222c2-56ae-4246-bbb8-df7927d33d39 ']' 00:17:43.071 21:14:05 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:43.329 [2024-06-07 21:14:05.887237] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:43.329 [2024-06-07 21:14:05.887276] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.329 [2024-06-07 21:14:05.887439] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.329 [2024-06-07 21:14:05.887572] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.329 [2024-06-07 21:14:05.887602] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:17:43.329 21:14:05 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.329 21:14:05 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:43.588 21:14:06 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:43.588 21:14:06 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:43.588 21:14:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:43.588 21:14:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
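The teardown that starts here mirrors the loop at bdev_raid.sh@392-393: delete the array, then every passthru in ${base_bdevs_pt[@]}, leaving the malloc bdevs, and the superblocks already written on them, in place. A condensed sketch:

  $rpc_py bdev_raid_delete raid_bdev1
  for bdev_pt in "${base_bdevs_pt[@]}"; do    # pt1 pt2 pt3 pt4
      $rpc_py bdev_passthru_delete "$bdev_pt"
  done
  # the stale superblocks are exactly what the next, negative test relies on:
  # bdev_raid_create straight from malloc1..malloc4 must fail with -17 "File exists"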
00:17:43.846 21:14:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:43.846 21:14:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:44.108 21:14:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:44.108 21:14:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:44.108 21:14:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:44.108 21:14:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:44.367 21:14:06 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:44.367 21:14:06 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:44.626 21:14:07 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:44.626 21:14:07 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:44.626 21:14:07 -- common/autotest_common.sh@640 -- # local es=0 00:17:44.626 21:14:07 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:44.626 21:14:07 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:44.626 21:14:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:44.626 21:14:07 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:44.626 21:14:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:44.626 21:14:07 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:44.626 21:14:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:44.626 21:14:07 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:44.626 21:14:07 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:44.626 21:14:07 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:44.884 [2024-06-07 21:14:07.363467] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:44.884 [2024-06-07 21:14:07.365333] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:44.884 [2024-06-07 21:14:07.365415] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:44.884 [2024-06-07 21:14:07.365455] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:44.884 [2024-06-07 21:14:07.365507] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:44.884 [2024-06-07 21:14:07.365617] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:44.884 [2024-06-07 21:14:07.365688] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:44.884 
[2024-06-07 21:14:07.365744] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:17:44.884 [2024-06-07 21:14:07.365770] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:44.884 [2024-06-07 21:14:07.365780] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:17:44.884 request: 00:17:44.884 { 00:17:44.884 "name": "raid_bdev1", 00:17:44.884 "raid_level": "concat", 00:17:44.884 "base_bdevs": [ 00:17:44.884 "malloc1", 00:17:44.884 "malloc2", 00:17:44.884 "malloc3", 00:17:44.884 "malloc4" 00:17:44.884 ], 00:17:44.884 "superblock": false, 00:17:44.884 "strip_size_kb": 64, 00:17:44.884 "method": "bdev_raid_create", 00:17:44.884 "req_id": 1 00:17:44.884 } 00:17:44.884 Got JSON-RPC error response 00:17:44.884 response: 00:17:44.884 { 00:17:44.884 "code": -17, 00:17:44.884 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:44.884 } 00:17:44.884 21:14:07 -- common/autotest_common.sh@643 -- # es=1 00:17:44.884 21:14:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:44.884 21:14:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:44.884 21:14:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:44.884 21:14:07 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.885 21:14:07 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:45.144 21:14:07 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:45.144 21:14:07 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:45.144 21:14:07 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:45.144 [2024-06-07 21:14:07.807622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:45.144 [2024-06-07 21:14:07.807734] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.144 [2024-06-07 21:14:07.807768] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:45.144 [2024-06-07 21:14:07.807795] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.144 [2024-06-07 21:14:07.810029] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.144 [2024-06-07 21:14:07.810109] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:45.144 [2024-06-07 21:14:07.810208] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:45.144 [2024-06-07 21:14:07.810276] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:45.144 pt1 00:17:45.402 21:14:07 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:17:45.402 21:14:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:45.402 21:14:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:45.402 21:14:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:45.402 21:14:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:45.402 21:14:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:45.402 21:14:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:45.402 21:14:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:45.402 21:14:07 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:17:45.402 21:14:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:45.402 21:14:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.402 21:14:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.402 21:14:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:45.402 "name": "raid_bdev1", 00:17:45.402 "uuid": "151222c2-56ae-4246-bbb8-df7927d33d39", 00:17:45.402 "strip_size_kb": 64, 00:17:45.402 "state": "configuring", 00:17:45.402 "raid_level": "concat", 00:17:45.402 "superblock": true, 00:17:45.402 "num_base_bdevs": 4, 00:17:45.402 "num_base_bdevs_discovered": 1, 00:17:45.402 "num_base_bdevs_operational": 4, 00:17:45.402 "base_bdevs_list": [ 00:17:45.402 { 00:17:45.402 "name": "pt1", 00:17:45.402 "uuid": "2eb4e8d4-d6ba-57d1-a85c-ab3ad79a95a9", 00:17:45.402 "is_configured": true, 00:17:45.402 "data_offset": 2048, 00:17:45.402 "data_size": 63488 00:17:45.402 }, 00:17:45.402 { 00:17:45.402 "name": null, 00:17:45.402 "uuid": "1ba1275b-4d0f-5015-b6bc-646ec5c4e382", 00:17:45.402 "is_configured": false, 00:17:45.402 "data_offset": 2048, 00:17:45.402 "data_size": 63488 00:17:45.402 }, 00:17:45.402 { 00:17:45.402 "name": null, 00:17:45.402 "uuid": "d8ce8b50-8457-550f-9443-37708e7a12b8", 00:17:45.402 "is_configured": false, 00:17:45.402 "data_offset": 2048, 00:17:45.402 "data_size": 63488 00:17:45.402 }, 00:17:45.402 { 00:17:45.402 "name": null, 00:17:45.402 "uuid": "eec7a522-d443-5de8-8d5b-368d5a35057c", 00:17:45.402 "is_configured": false, 00:17:45.402 "data_offset": 2048, 00:17:45.402 "data_size": 63488 00:17:45.402 } 00:17:45.402 ] 00:17:45.402 }' 00:17:45.402 21:14:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:45.402 21:14:08 -- common/autotest_common.sh@10 -- # set +x 00:17:46.336 21:14:08 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:17:46.336 21:14:08 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:46.336 [2024-06-07 21:14:08.883954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:46.336 [2024-06-07 21:14:08.884064] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.336 [2024-06-07 21:14:08.884105] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:46.336 [2024-06-07 21:14:08.884126] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.336 [2024-06-07 21:14:08.884621] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.336 [2024-06-07 21:14:08.884675] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:46.336 [2024-06-07 21:14:08.884792] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:46.336 [2024-06-07 21:14:08.884821] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:46.336 pt2 00:17:46.336 21:14:08 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:46.594 [2024-06-07 21:14:09.091987] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:46.594 21:14:09 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:17:46.594 21:14:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
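The state checks that follow all go through verify_raid_bdev_state (bdev_raid.sh@117-129). A paraphrased sketch of its assertions, reconstructed from the xtrace rather than copied from the script; the real helper also tallies num_base_bdevs_discovered from base_bdevs_list:

  verify_raid_bdev_state() {
      local raid_bdev_name=$1 expected_state=$2 raid_level=$3
      local strip_size=$4 num_base_bdevs_operational=$5
      local raid_bdev_info
      raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all |
          jq -r ".[] | select(.name == \"$raid_bdev_name\")")
      [ "$(jq -r '.state' <<< "$raid_bdev_info")" = "$expected_state" ]
      [ "$(jq -r '.raid_level' <<< "$raid_bdev_info")" = "$raid_level" ]
      [ "$(jq -r '.strip_size_kb' <<< "$raid_bdev_info")" -eq "$strip_size" ]
      [ "$(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info")" -eq "$num_base_bdevs_operational" ]
  }

Deleting pt2 from the half-built array, as done just above, must leave the raid in "configuring", which is what the call that follows asserts: verify_raid_bdev_state raid_bdev1 configuring concat 64 4.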
00:17:46.594 21:14:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:46.594 21:14:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:46.594 21:14:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:46.594 21:14:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:46.594 21:14:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:46.594 21:14:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:46.594 21:14:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:46.594 21:14:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:46.594 21:14:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.594 21:14:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.851 21:14:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:46.851 "name": "raid_bdev1", 00:17:46.851 "uuid": "151222c2-56ae-4246-bbb8-df7927d33d39", 00:17:46.851 "strip_size_kb": 64, 00:17:46.851 "state": "configuring", 00:17:46.851 "raid_level": "concat", 00:17:46.851 "superblock": true, 00:17:46.851 "num_base_bdevs": 4, 00:17:46.851 "num_base_bdevs_discovered": 1, 00:17:46.851 "num_base_bdevs_operational": 4, 00:17:46.851 "base_bdevs_list": [ 00:17:46.851 { 00:17:46.851 "name": "pt1", 00:17:46.851 "uuid": "2eb4e8d4-d6ba-57d1-a85c-ab3ad79a95a9", 00:17:46.851 "is_configured": true, 00:17:46.851 "data_offset": 2048, 00:17:46.851 "data_size": 63488 00:17:46.851 }, 00:17:46.851 { 00:17:46.851 "name": null, 00:17:46.851 "uuid": "1ba1275b-4d0f-5015-b6bc-646ec5c4e382", 00:17:46.851 "is_configured": false, 00:17:46.851 "data_offset": 2048, 00:17:46.851 "data_size": 63488 00:17:46.851 }, 00:17:46.851 { 00:17:46.851 "name": null, 00:17:46.851 "uuid": "d8ce8b50-8457-550f-9443-37708e7a12b8", 00:17:46.851 "is_configured": false, 00:17:46.851 "data_offset": 2048, 00:17:46.851 "data_size": 63488 00:17:46.851 }, 00:17:46.851 { 00:17:46.851 "name": null, 00:17:46.851 "uuid": "eec7a522-d443-5de8-8d5b-368d5a35057c", 00:17:46.851 "is_configured": false, 00:17:46.851 "data_offset": 2048, 00:17:46.851 "data_size": 63488 00:17:46.851 } 00:17:46.851 ] 00:17:46.851 }' 00:17:46.851 21:14:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:46.851 21:14:09 -- common/autotest_common.sh@10 -- # set +x 00:17:47.416 21:14:10 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:47.416 21:14:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:47.416 21:14:10 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:47.674 [2024-06-07 21:14:10.316294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:47.674 [2024-06-07 21:14:10.316422] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.674 [2024-06-07 21:14:10.316464] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:47.674 [2024-06-07 21:14:10.316486] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.674 [2024-06-07 21:14:10.317034] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.674 [2024-06-07 21:14:10.317116] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:47.674 [2024-06-07 21:14:10.317217] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:17:47.674 [2024-06-07 21:14:10.317246] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:47.674 pt2 00:17:47.674 21:14:10 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:47.674 21:14:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:47.674 21:14:10 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:47.931 [2024-06-07 21:14:10.524303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:47.931 [2024-06-07 21:14:10.524404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.931 [2024-06-07 21:14:10.524435] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:47.931 [2024-06-07 21:14:10.524460] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.931 [2024-06-07 21:14:10.524932] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.931 [2024-06-07 21:14:10.525008] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:47.931 [2024-06-07 21:14:10.525085] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:47.931 [2024-06-07 21:14:10.525112] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:47.931 pt3 00:17:47.931 21:14:10 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:47.931 21:14:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:47.932 21:14:10 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:48.190 [2024-06-07 21:14:10.724406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:48.190 [2024-06-07 21:14:10.724529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.190 [2024-06-07 21:14:10.724581] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:48.190 [2024-06-07 21:14:10.724609] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.190 [2024-06-07 21:14:10.725169] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.190 [2024-06-07 21:14:10.725231] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:48.190 [2024-06-07 21:14:10.725367] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:17:48.190 [2024-06-07 21:14:10.725397] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:48.190 [2024-06-07 21:14:10.725553] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:17:48.190 [2024-06-07 21:14:10.725576] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:48.190 [2024-06-07 21:14:10.725656] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:48.190 [2024-06-07 21:14:10.725975] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:17:48.190 [2024-06-07 21:14:10.725998] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:17:48.190 [2024-06-07 21:14:10.726099] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:17:48.190 pt4 00:17:48.190 21:14:10 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:48.190 21:14:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:48.190 21:14:10 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:48.190 21:14:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:48.190 21:14:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:48.190 21:14:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:48.190 21:14:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:48.190 21:14:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:48.190 21:14:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:48.190 21:14:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:48.190 21:14:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:48.190 21:14:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:48.190 21:14:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.190 21:14:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.448 21:14:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:48.448 "name": "raid_bdev1", 00:17:48.448 "uuid": "151222c2-56ae-4246-bbb8-df7927d33d39", 00:17:48.448 "strip_size_kb": 64, 00:17:48.448 "state": "online", 00:17:48.448 "raid_level": "concat", 00:17:48.448 "superblock": true, 00:17:48.448 "num_base_bdevs": 4, 00:17:48.448 "num_base_bdevs_discovered": 4, 00:17:48.448 "num_base_bdevs_operational": 4, 00:17:48.448 "base_bdevs_list": [ 00:17:48.448 { 00:17:48.448 "name": "pt1", 00:17:48.448 "uuid": "2eb4e8d4-d6ba-57d1-a85c-ab3ad79a95a9", 00:17:48.448 "is_configured": true, 00:17:48.448 "data_offset": 2048, 00:17:48.448 "data_size": 63488 00:17:48.448 }, 00:17:48.448 { 00:17:48.448 "name": "pt2", 00:17:48.448 "uuid": "1ba1275b-4d0f-5015-b6bc-646ec5c4e382", 00:17:48.448 "is_configured": true, 00:17:48.448 "data_offset": 2048, 00:17:48.448 "data_size": 63488 00:17:48.448 }, 00:17:48.448 { 00:17:48.448 "name": "pt3", 00:17:48.448 "uuid": "d8ce8b50-8457-550f-9443-37708e7a12b8", 00:17:48.448 "is_configured": true, 00:17:48.448 "data_offset": 2048, 00:17:48.448 "data_size": 63488 00:17:48.448 }, 00:17:48.448 { 00:17:48.448 "name": "pt4", 00:17:48.448 "uuid": "eec7a522-d443-5de8-8d5b-368d5a35057c", 00:17:48.448 "is_configured": true, 00:17:48.448 "data_offset": 2048, 00:17:48.448 "data_size": 63488 00:17:48.448 } 00:17:48.448 ] 00:17:48.448 }' 00:17:48.448 21:14:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:48.448 21:14:10 -- common/autotest_common.sh@10 -- # set +x 00:17:49.014 21:14:11 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:49.014 21:14:11 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:49.273 [2024-06-07 21:14:11.840804] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.273 21:14:11 -- bdev/bdev_raid.sh@430 -- # '[' 151222c2-56ae-4246-bbb8-df7927d33d39 '!=' 151222c2-56ae-4246-bbb8-df7927d33d39 ']' 00:17:49.273 21:14:11 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:17:49.273 21:14:11 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:49.273 21:14:11 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:49.273 21:14:11 -- bdev/bdev_raid.sh@511 -- # killprocess 133931 00:17:49.273 21:14:11 -- common/autotest_common.sh@926 -- # '[' 
-z 133931 ']' 00:17:49.273 21:14:11 -- common/autotest_common.sh@930 -- # kill -0 133931 00:17:49.273 21:14:11 -- common/autotest_common.sh@931 -- # uname 00:17:49.273 21:14:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:49.273 21:14:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133931 00:17:49.273 killing process with pid 133931 00:17:49.273 21:14:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:49.273 21:14:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:49.273 21:14:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133931' 00:17:49.273 21:14:11 -- common/autotest_common.sh@945 -- # kill 133931 00:17:49.273 21:14:11 -- common/autotest_common.sh@950 -- # wait 133931 00:17:49.273 [2024-06-07 21:14:11.876097] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:49.273 [2024-06-07 21:14:11.876185] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.273 [2024-06-07 21:14:11.876273] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:49.273 [2024-06-07 21:14:11.876293] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:17:49.273 [2024-06-07 21:14:11.916298] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:49.531 ************************************ 00:17:49.531 END TEST raid_superblock_test 00:17:49.531 ************************************ 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:49.531 00:17:49.531 real 0m10.544s 00:17:49.531 user 0m19.437s 00:17:49.531 sys 0m1.243s 00:17:49.531 21:14:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:49.531 21:14:12 -- common/autotest_common.sh@10 -- # set +x 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:17:49.531 21:14:12 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:49.531 21:14:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:49.531 21:14:12 -- common/autotest_common.sh@10 -- # set +x 00:17:49.531 ************************************ 00:17:49.531 START TEST raid_state_function_test 00:17:49.531 ************************************ 00:17:49.531 21:14:12 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:49.531 21:14:12 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:49.531 21:14:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:49.790 21:14:12 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:49.790 21:14:12 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:49.790 21:14:12 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:49.790 21:14:12 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:49.790 21:14:12 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:49.790 21:14:12 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:49.790 21:14:12 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:49.790 21:14:12 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:49.790 21:14:12 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:49.790 21:14:12 -- bdev/bdev_raid.sh@226 -- # raid_pid=134259 00:17:49.790 Process raid pid: 134259 00:17:49.790 21:14:12 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 134259' 00:17:49.790 21:14:12 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:49.790 21:14:12 -- bdev/bdev_raid.sh@228 -- # waitforlisten 134259 /var/tmp/spdk-raid.sock 00:17:49.790 21:14:12 -- common/autotest_common.sh@819 -- # '[' -z 134259 ']' 00:17:49.790 21:14:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:49.790 21:14:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:49.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:49.790 21:14:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:49.790 21:14:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:49.790 21:14:12 -- common/autotest_common.sh@10 -- # set +x 00:17:49.790 [2024-06-07 21:14:12.248662] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:17:49.790 [2024-06-07 21:14:12.248867] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.790 [2024-06-07 21:14:12.404658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.048 [2024-06-07 21:14:12.470644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.048 [2024-06-07 21:14:12.523211] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.612 21:14:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:50.612 21:14:13 -- common/autotest_common.sh@852 -- # return 0 00:17:50.612 21:14:13 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:50.870 [2024-06-07 21:14:13.391482] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:50.870 [2024-06-07 21:14:13.391587] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:50.870 [2024-06-07 21:14:13.391617] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:50.870 [2024-06-07 21:14:13.391638] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:50.870 [2024-06-07 21:14:13.391645] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:50.870 [2024-06-07 21:14:13.391683] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:50.870 [2024-06-07 21:14:13.391691] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:50.870 [2024-06-07 21:14:13.391713] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:50.870 21:14:13 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:50.870 21:14:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:50.870 21:14:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:50.870 21:14:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:50.870 21:14:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:50.870 21:14:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:50.870 21:14:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:50.870 21:14:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:50.870 21:14:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:50.870 21:14:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:50.870 21:14:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.870 21:14:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.128 21:14:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:51.128 "name": "Existed_Raid", 00:17:51.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.128 "strip_size_kb": 0, 00:17:51.128 "state": "configuring", 00:17:51.128 "raid_level": "raid1", 00:17:51.128 "superblock": false, 00:17:51.128 "num_base_bdevs": 4, 00:17:51.128 "num_base_bdevs_discovered": 0, 00:17:51.128 "num_base_bdevs_operational": 4, 00:17:51.128 "base_bdevs_list": [ 00:17:51.128 { 00:17:51.128 "name": 
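raid_state_function_test builds its array in the opposite order: bdev_raid_create is issued first, every BaseBdevN "doesn't exist now", and the raid sits in "configuring" until each base bdev appears and is claimed. A condensed sketch of that sequence (same rpc_py wrapper as above; raid1 takes no -z strip size, hence the strip_size_kb of 0 in the JSON that follows):

  $rpc_py bdev_raid_create -r raid1 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  $rpc_py bdev_raid_get_bdevs all |
      jq -r '.[] | select(.name == "Existed_Raid") | .state'    # -> configuring
  $rpc_py bdev_malloc_create 32 512 -b BaseBdev1
  $rpc_py bdev_wait_for_examine    # let the raid module claim the new bdev
  # num_base_bdevs_discovered now reads 1; repeating for BaseBdev2..4
  # is what eventually moves the state from "configuring" to "online"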
"BaseBdev1", 00:17:51.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.128 "is_configured": false, 00:17:51.128 "data_offset": 0, 00:17:51.128 "data_size": 0 00:17:51.128 }, 00:17:51.128 { 00:17:51.128 "name": "BaseBdev2", 00:17:51.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.128 "is_configured": false, 00:17:51.128 "data_offset": 0, 00:17:51.128 "data_size": 0 00:17:51.128 }, 00:17:51.128 { 00:17:51.128 "name": "BaseBdev3", 00:17:51.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.128 "is_configured": false, 00:17:51.128 "data_offset": 0, 00:17:51.128 "data_size": 0 00:17:51.128 }, 00:17:51.128 { 00:17:51.128 "name": "BaseBdev4", 00:17:51.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.128 "is_configured": false, 00:17:51.128 "data_offset": 0, 00:17:51.128 "data_size": 0 00:17:51.128 } 00:17:51.128 ] 00:17:51.128 }' 00:17:51.128 21:14:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:51.128 21:14:13 -- common/autotest_common.sh@10 -- # set +x 00:17:51.694 21:14:14 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:51.953 [2024-06-07 21:14:14.507587] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:51.953 [2024-06-07 21:14:14.507649] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:51.953 21:14:14 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:52.211 [2024-06-07 21:14:14.771757] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:52.211 [2024-06-07 21:14:14.771837] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:52.211 [2024-06-07 21:14:14.771864] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:52.211 [2024-06-07 21:14:14.771898] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:52.211 [2024-06-07 21:14:14.771907] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:52.211 [2024-06-07 21:14:14.771944] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:52.211 [2024-06-07 21:14:14.771953] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:52.211 [2024-06-07 21:14:14.771991] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:52.212 21:14:14 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:52.470 [2024-06-07 21:14:15.026813] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:52.470 BaseBdev1 00:17:52.470 21:14:15 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:52.470 21:14:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:52.470 21:14:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:52.470 21:14:15 -- common/autotest_common.sh@889 -- # local i 00:17:52.470 21:14:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:52.470 21:14:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:52.471 21:14:15 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:52.729 21:14:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:52.988 [ 00:17:52.988 { 00:17:52.988 "name": "BaseBdev1", 00:17:52.988 "aliases": [ 00:17:52.988 "4d10ddac-c07d-4423-ba89-39facc18df10" 00:17:52.988 ], 00:17:52.988 "product_name": "Malloc disk", 00:17:52.988 "block_size": 512, 00:17:52.988 "num_blocks": 65536, 00:17:52.989 "uuid": "4d10ddac-c07d-4423-ba89-39facc18df10", 00:17:52.989 "assigned_rate_limits": { 00:17:52.989 "rw_ios_per_sec": 0, 00:17:52.989 "rw_mbytes_per_sec": 0, 00:17:52.989 "r_mbytes_per_sec": 0, 00:17:52.989 "w_mbytes_per_sec": 0 00:17:52.989 }, 00:17:52.989 "claimed": true, 00:17:52.989 "claim_type": "exclusive_write", 00:17:52.989 "zoned": false, 00:17:52.989 "supported_io_types": { 00:17:52.989 "read": true, 00:17:52.989 "write": true, 00:17:52.989 "unmap": true, 00:17:52.989 "write_zeroes": true, 00:17:52.989 "flush": true, 00:17:52.989 "reset": true, 00:17:52.989 "compare": false, 00:17:52.989 "compare_and_write": false, 00:17:52.989 "abort": true, 00:17:52.989 "nvme_admin": false, 00:17:52.989 "nvme_io": false 00:17:52.989 }, 00:17:52.989 "memory_domains": [ 00:17:52.989 { 00:17:52.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.989 "dma_device_type": 2 00:17:52.989 } 00:17:52.989 ], 00:17:52.989 "driver_specific": {} 00:17:52.989 } 00:17:52.989 ] 00:17:52.989 21:14:15 -- common/autotest_common.sh@895 -- # return 0 00:17:52.989 21:14:15 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:52.989 21:14:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:52.989 21:14:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:52.989 21:14:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:52.989 21:14:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:52.989 21:14:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:52.989 21:14:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:52.989 21:14:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:52.989 21:14:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:52.989 21:14:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:52.989 21:14:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.989 21:14:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.247 21:14:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:53.247 "name": "Existed_Raid", 00:17:53.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.247 "strip_size_kb": 0, 00:17:53.247 "state": "configuring", 00:17:53.247 "raid_level": "raid1", 00:17:53.247 "superblock": false, 00:17:53.247 "num_base_bdevs": 4, 00:17:53.247 "num_base_bdevs_discovered": 1, 00:17:53.247 "num_base_bdevs_operational": 4, 00:17:53.247 "base_bdevs_list": [ 00:17:53.247 { 00:17:53.247 "name": "BaseBdev1", 00:17:53.247 "uuid": "4d10ddac-c07d-4423-ba89-39facc18df10", 00:17:53.247 "is_configured": true, 00:17:53.247 "data_offset": 0, 00:17:53.247 "data_size": 65536 00:17:53.247 }, 00:17:53.247 { 00:17:53.247 "name": "BaseBdev2", 00:17:53.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.247 "is_configured": false, 00:17:53.247 "data_offset": 0, 00:17:53.247 "data_size": 0 00:17:53.247 }, 
00:17:53.247 { 00:17:53.247 "name": "BaseBdev3", 00:17:53.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.247 "is_configured": false, 00:17:53.247 "data_offset": 0, 00:17:53.247 "data_size": 0 00:17:53.247 }, 00:17:53.247 { 00:17:53.247 "name": "BaseBdev4", 00:17:53.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.247 "is_configured": false, 00:17:53.247 "data_offset": 0, 00:17:53.247 "data_size": 0 00:17:53.247 } 00:17:53.247 ] 00:17:53.247 }' 00:17:53.247 21:14:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:53.247 21:14:15 -- common/autotest_common.sh@10 -- # set +x 00:17:53.814 21:14:16 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:54.073 [2024-06-07 21:14:16.667255] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:54.073 [2024-06-07 21:14:16.667340] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:54.073 21:14:16 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:54.073 21:14:16 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:54.332 [2024-06-07 21:14:16.915373] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:54.332 [2024-06-07 21:14:16.917383] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:54.332 [2024-06-07 21:14:16.917479] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:54.332 [2024-06-07 21:14:16.917507] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:54.332 [2024-06-07 21:14:16.917531] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:54.332 [2024-06-07 21:14:16.917539] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:54.332 [2024-06-07 21:14:16.917556] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:54.332 21:14:16 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:54.332 21:14:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:54.332 21:14:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:54.332 21:14:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:54.332 21:14:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:54.332 21:14:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:54.332 21:14:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:54.332 21:14:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:54.332 21:14:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:54.332 21:14:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:54.332 21:14:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:54.332 21:14:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:54.332 21:14:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.332 21:14:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.590 21:14:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:54.590 "name": "Existed_Raid", 00:17:54.590 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:54.590 "strip_size_kb": 0, 00:17:54.590 "state": "configuring", 00:17:54.590 "raid_level": "raid1", 00:17:54.590 "superblock": false, 00:17:54.590 "num_base_bdevs": 4, 00:17:54.590 "num_base_bdevs_discovered": 1, 00:17:54.590 "num_base_bdevs_operational": 4, 00:17:54.590 "base_bdevs_list": [ 00:17:54.590 { 00:17:54.590 "name": "BaseBdev1", 00:17:54.590 "uuid": "4d10ddac-c07d-4423-ba89-39facc18df10", 00:17:54.590 "is_configured": true, 00:17:54.590 "data_offset": 0, 00:17:54.590 "data_size": 65536 00:17:54.590 }, 00:17:54.590 { 00:17:54.590 "name": "BaseBdev2", 00:17:54.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.590 "is_configured": false, 00:17:54.590 "data_offset": 0, 00:17:54.590 "data_size": 0 00:17:54.590 }, 00:17:54.590 { 00:17:54.590 "name": "BaseBdev3", 00:17:54.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.590 "is_configured": false, 00:17:54.590 "data_offset": 0, 00:17:54.590 "data_size": 0 00:17:54.590 }, 00:17:54.590 { 00:17:54.590 "name": "BaseBdev4", 00:17:54.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.590 "is_configured": false, 00:17:54.590 "data_offset": 0, 00:17:54.590 "data_size": 0 00:17:54.590 } 00:17:54.590 ] 00:17:54.590 }' 00:17:54.590 21:14:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:54.590 21:14:17 -- common/autotest_common.sh@10 -- # set +x 00:17:55.157 21:14:17 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:55.416 [2024-06-07 21:14:18.004603] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:55.416 BaseBdev2 00:17:55.416 21:14:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:55.416 21:14:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:55.416 21:14:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:55.416 21:14:18 -- common/autotest_common.sh@889 -- # local i 00:17:55.416 21:14:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:55.416 21:14:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:55.416 21:14:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:55.675 21:14:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:55.934 [ 00:17:55.934 { 00:17:55.934 "name": "BaseBdev2", 00:17:55.934 "aliases": [ 00:17:55.934 "ed90ff18-fa8b-4ab4-9a4b-87658fecc693" 00:17:55.934 ], 00:17:55.934 "product_name": "Malloc disk", 00:17:55.934 "block_size": 512, 00:17:55.934 "num_blocks": 65536, 00:17:55.934 "uuid": "ed90ff18-fa8b-4ab4-9a4b-87658fecc693", 00:17:55.934 "assigned_rate_limits": { 00:17:55.934 "rw_ios_per_sec": 0, 00:17:55.934 "rw_mbytes_per_sec": 0, 00:17:55.934 "r_mbytes_per_sec": 0, 00:17:55.934 "w_mbytes_per_sec": 0 00:17:55.934 }, 00:17:55.934 "claimed": true, 00:17:55.934 "claim_type": "exclusive_write", 00:17:55.934 "zoned": false, 00:17:55.934 "supported_io_types": { 00:17:55.934 "read": true, 00:17:55.934 "write": true, 00:17:55.934 "unmap": true, 00:17:55.934 "write_zeroes": true, 00:17:55.934 "flush": true, 00:17:55.934 "reset": true, 00:17:55.934 "compare": false, 00:17:55.934 "compare_and_write": false, 00:17:55.934 "abort": true, 00:17:55.934 "nvme_admin": false, 00:17:55.934 "nvme_io": false 00:17:55.934 }, 00:17:55.934 "memory_domains": [ 00:17:55.934 { 
00:17:55.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.934 "dma_device_type": 2 00:17:55.934 } 00:17:55.934 ], 00:17:55.934 "driver_specific": {} 00:17:55.934 } 00:17:55.934 ] 00:17:55.934 21:14:18 -- common/autotest_common.sh@895 -- # return 0 00:17:55.934 21:14:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:55.934 21:14:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:55.934 21:14:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:55.934 21:14:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:55.934 21:14:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:55.934 21:14:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:55.934 21:14:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:55.934 21:14:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:55.934 21:14:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.934 21:14:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.934 21:14:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.934 21:14:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:55.934 21:14:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.934 21:14:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.192 21:14:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:56.192 "name": "Existed_Raid", 00:17:56.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.192 "strip_size_kb": 0, 00:17:56.192 "state": "configuring", 00:17:56.192 "raid_level": "raid1", 00:17:56.192 "superblock": false, 00:17:56.192 "num_base_bdevs": 4, 00:17:56.192 "num_base_bdevs_discovered": 2, 00:17:56.192 "num_base_bdevs_operational": 4, 00:17:56.192 "base_bdevs_list": [ 00:17:56.192 { 00:17:56.192 "name": "BaseBdev1", 00:17:56.192 "uuid": "4d10ddac-c07d-4423-ba89-39facc18df10", 00:17:56.192 "is_configured": true, 00:17:56.192 "data_offset": 0, 00:17:56.192 "data_size": 65536 00:17:56.192 }, 00:17:56.192 { 00:17:56.192 "name": "BaseBdev2", 00:17:56.192 "uuid": "ed90ff18-fa8b-4ab4-9a4b-87658fecc693", 00:17:56.192 "is_configured": true, 00:17:56.192 "data_offset": 0, 00:17:56.192 "data_size": 65536 00:17:56.192 }, 00:17:56.192 { 00:17:56.192 "name": "BaseBdev3", 00:17:56.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.192 "is_configured": false, 00:17:56.192 "data_offset": 0, 00:17:56.192 "data_size": 0 00:17:56.192 }, 00:17:56.192 { 00:17:56.192 "name": "BaseBdev4", 00:17:56.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.193 "is_configured": false, 00:17:56.193 "data_offset": 0, 00:17:56.193 "data_size": 0 00:17:56.193 } 00:17:56.193 ] 00:17:56.193 }' 00:17:56.193 21:14:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:56.193 21:14:18 -- common/autotest_common.sh@10 -- # set +x 00:17:56.760 21:14:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:57.018 [2024-06-07 21:14:19.549830] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:57.018 BaseBdev3 00:17:57.018 21:14:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:57.018 21:14:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:57.018 21:14:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:57.018 21:14:19 -- 
common/autotest_common.sh@889 -- # local i 00:17:57.018 21:14:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:57.018 21:14:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:57.018 21:14:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:57.277 21:14:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:57.535 [ 00:17:57.535 { 00:17:57.535 "name": "BaseBdev3", 00:17:57.535 "aliases": [ 00:17:57.535 "69066795-e38d-4a14-962c-5e01efa758cc" 00:17:57.535 ], 00:17:57.535 "product_name": "Malloc disk", 00:17:57.535 "block_size": 512, 00:17:57.535 "num_blocks": 65536, 00:17:57.535 "uuid": "69066795-e38d-4a14-962c-5e01efa758cc", 00:17:57.535 "assigned_rate_limits": { 00:17:57.535 "rw_ios_per_sec": 0, 00:17:57.535 "rw_mbytes_per_sec": 0, 00:17:57.536 "r_mbytes_per_sec": 0, 00:17:57.536 "w_mbytes_per_sec": 0 00:17:57.536 }, 00:17:57.536 "claimed": true, 00:17:57.536 "claim_type": "exclusive_write", 00:17:57.536 "zoned": false, 00:17:57.536 "supported_io_types": { 00:17:57.536 "read": true, 00:17:57.536 "write": true, 00:17:57.536 "unmap": true, 00:17:57.536 "write_zeroes": true, 00:17:57.536 "flush": true, 00:17:57.536 "reset": true, 00:17:57.536 "compare": false, 00:17:57.536 "compare_and_write": false, 00:17:57.536 "abort": true, 00:17:57.536 "nvme_admin": false, 00:17:57.536 "nvme_io": false 00:17:57.536 }, 00:17:57.536 "memory_domains": [ 00:17:57.536 { 00:17:57.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.536 "dma_device_type": 2 00:17:57.536 } 00:17:57.536 ], 00:17:57.536 "driver_specific": {} 00:17:57.536 } 00:17:57.536 ] 00:17:57.536 21:14:19 -- common/autotest_common.sh@895 -- # return 0 00:17:57.536 21:14:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:57.536 21:14:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:57.536 21:14:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:57.536 21:14:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:57.536 21:14:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:57.536 21:14:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:57.536 21:14:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:57.536 21:14:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:57.536 21:14:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:57.536 21:14:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:57.536 21:14:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:57.536 21:14:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:57.536 21:14:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.536 21:14:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.536 21:14:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:57.536 "name": "Existed_Raid", 00:17:57.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.536 "strip_size_kb": 0, 00:17:57.536 "state": "configuring", 00:17:57.536 "raid_level": "raid1", 00:17:57.536 "superblock": false, 00:17:57.536 "num_base_bdevs": 4, 00:17:57.536 "num_base_bdevs_discovered": 3, 00:17:57.536 "num_base_bdevs_operational": 4, 00:17:57.536 "base_bdevs_list": [ 00:17:57.536 { 00:17:57.536 "name": "BaseBdev1", 
00:17:57.536 "uuid": "4d10ddac-c07d-4423-ba89-39facc18df10", 00:17:57.536 "is_configured": true, 00:17:57.536 "data_offset": 0, 00:17:57.536 "data_size": 65536 00:17:57.536 }, 00:17:57.536 { 00:17:57.536 "name": "BaseBdev2", 00:17:57.536 "uuid": "ed90ff18-fa8b-4ab4-9a4b-87658fecc693", 00:17:57.536 "is_configured": true, 00:17:57.536 "data_offset": 0, 00:17:57.536 "data_size": 65536 00:17:57.536 }, 00:17:57.536 { 00:17:57.536 "name": "BaseBdev3", 00:17:57.536 "uuid": "69066795-e38d-4a14-962c-5e01efa758cc", 00:17:57.536 "is_configured": true, 00:17:57.536 "data_offset": 0, 00:17:57.536 "data_size": 65536 00:17:57.536 }, 00:17:57.536 { 00:17:57.536 "name": "BaseBdev4", 00:17:57.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.536 "is_configured": false, 00:17:57.536 "data_offset": 0, 00:17:57.536 "data_size": 0 00:17:57.536 } 00:17:57.536 ] 00:17:57.536 }' 00:17:57.536 21:14:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:57.536 21:14:20 -- common/autotest_common.sh@10 -- # set +x 00:17:58.486 21:14:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:58.486 [2024-06-07 21:14:21.062773] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:58.486 [2024-06-07 21:14:21.062853] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:17:58.486 [2024-06-07 21:14:21.062865] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:58.486 [2024-06-07 21:14:21.063034] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:58.486 [2024-06-07 21:14:21.063560] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:17:58.486 [2024-06-07 21:14:21.063586] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:17:58.486 [2024-06-07 21:14:21.063874] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.486 BaseBdev4 00:17:58.486 21:14:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:58.486 21:14:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:17:58.486 21:14:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:58.486 21:14:21 -- common/autotest_common.sh@889 -- # local i 00:17:58.486 21:14:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:58.486 21:14:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:58.486 21:14:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:58.760 21:14:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:59.019 [ 00:17:59.019 { 00:17:59.019 "name": "BaseBdev4", 00:17:59.019 "aliases": [ 00:17:59.019 "11bdbe40-57da-48e4-9729-706960ce9321" 00:17:59.019 ], 00:17:59.019 "product_name": "Malloc disk", 00:17:59.019 "block_size": 512, 00:17:59.019 "num_blocks": 65536, 00:17:59.019 "uuid": "11bdbe40-57da-48e4-9729-706960ce9321", 00:17:59.019 "assigned_rate_limits": { 00:17:59.019 "rw_ios_per_sec": 0, 00:17:59.019 "rw_mbytes_per_sec": 0, 00:17:59.019 "r_mbytes_per_sec": 0, 00:17:59.019 "w_mbytes_per_sec": 0 00:17:59.019 }, 00:17:59.019 "claimed": true, 00:17:59.019 "claim_type": "exclusive_write", 00:17:59.019 "zoned": false, 00:17:59.019 "supported_io_types": { 
00:17:59.019 "read": true, 00:17:59.019 "write": true, 00:17:59.019 "unmap": true, 00:17:59.019 "write_zeroes": true, 00:17:59.019 "flush": true, 00:17:59.019 "reset": true, 00:17:59.019 "compare": false, 00:17:59.019 "compare_and_write": false, 00:17:59.019 "abort": true, 00:17:59.019 "nvme_admin": false, 00:17:59.019 "nvme_io": false 00:17:59.019 }, 00:17:59.019 "memory_domains": [ 00:17:59.019 { 00:17:59.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.019 "dma_device_type": 2 00:17:59.019 } 00:17:59.019 ], 00:17:59.019 "driver_specific": {} 00:17:59.019 } 00:17:59.019 ] 00:17:59.019 21:14:21 -- common/autotest_common.sh@895 -- # return 0 00:17:59.019 21:14:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:59.019 21:14:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:59.019 21:14:21 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:59.019 21:14:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:59.019 21:14:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:59.019 21:14:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:59.019 21:14:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:59.019 21:14:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:59.019 21:14:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:59.019 21:14:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:59.019 21:14:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:59.019 21:14:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:59.019 21:14:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.019 21:14:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.277 21:14:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:59.277 "name": "Existed_Raid", 00:17:59.277 "uuid": "b44e1d4f-23e6-483f-988d-3ae4ab59c70d", 00:17:59.277 "strip_size_kb": 0, 00:17:59.277 "state": "online", 00:17:59.277 "raid_level": "raid1", 00:17:59.277 "superblock": false, 00:17:59.277 "num_base_bdevs": 4, 00:17:59.277 "num_base_bdevs_discovered": 4, 00:17:59.277 "num_base_bdevs_operational": 4, 00:17:59.277 "base_bdevs_list": [ 00:17:59.277 { 00:17:59.277 "name": "BaseBdev1", 00:17:59.277 "uuid": "4d10ddac-c07d-4423-ba89-39facc18df10", 00:17:59.277 "is_configured": true, 00:17:59.277 "data_offset": 0, 00:17:59.277 "data_size": 65536 00:17:59.277 }, 00:17:59.277 { 00:17:59.277 "name": "BaseBdev2", 00:17:59.277 "uuid": "ed90ff18-fa8b-4ab4-9a4b-87658fecc693", 00:17:59.277 "is_configured": true, 00:17:59.277 "data_offset": 0, 00:17:59.277 "data_size": 65536 00:17:59.277 }, 00:17:59.277 { 00:17:59.277 "name": "BaseBdev3", 00:17:59.277 "uuid": "69066795-e38d-4a14-962c-5e01efa758cc", 00:17:59.277 "is_configured": true, 00:17:59.277 "data_offset": 0, 00:17:59.277 "data_size": 65536 00:17:59.277 }, 00:17:59.277 { 00:17:59.277 "name": "BaseBdev4", 00:17:59.277 "uuid": "11bdbe40-57da-48e4-9729-706960ce9321", 00:17:59.277 "is_configured": true, 00:17:59.277 "data_offset": 0, 00:17:59.277 "data_size": 65536 00:17:59.277 } 00:17:59.277 ] 00:17:59.277 }' 00:17:59.277 21:14:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:59.277 21:14:21 -- common/autotest_common.sh@10 -- # set +x 00:17:59.844 21:14:22 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:00.102 [2024-06-07 21:14:22.699336] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:00.102 21:14:22 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:00.102 21:14:22 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:00.102 21:14:22 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:00.102 21:14:22 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:00.102 21:14:22 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:00.102 21:14:22 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:00.102 21:14:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:00.102 21:14:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:00.102 21:14:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:00.102 21:14:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:00.102 21:14:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:00.102 21:14:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:00.102 21:14:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:00.102 21:14:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:00.102 21:14:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:00.102 21:14:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.102 21:14:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.360 21:14:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:00.360 "name": "Existed_Raid", 00:18:00.361 "uuid": "b44e1d4f-23e6-483f-988d-3ae4ab59c70d", 00:18:00.361 "strip_size_kb": 0, 00:18:00.361 "state": "online", 00:18:00.361 "raid_level": "raid1", 00:18:00.361 "superblock": false, 00:18:00.361 "num_base_bdevs": 4, 00:18:00.361 "num_base_bdevs_discovered": 3, 00:18:00.361 "num_base_bdevs_operational": 3, 00:18:00.361 "base_bdevs_list": [ 00:18:00.361 { 00:18:00.361 "name": null, 00:18:00.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.361 "is_configured": false, 00:18:00.361 "data_offset": 0, 00:18:00.361 "data_size": 65536 00:18:00.361 }, 00:18:00.361 { 00:18:00.361 "name": "BaseBdev2", 00:18:00.361 "uuid": "ed90ff18-fa8b-4ab4-9a4b-87658fecc693", 00:18:00.361 "is_configured": true, 00:18:00.361 "data_offset": 0, 00:18:00.361 "data_size": 65536 00:18:00.361 }, 00:18:00.361 { 00:18:00.361 "name": "BaseBdev3", 00:18:00.361 "uuid": "69066795-e38d-4a14-962c-5e01efa758cc", 00:18:00.361 "is_configured": true, 00:18:00.361 "data_offset": 0, 00:18:00.361 "data_size": 65536 00:18:00.361 }, 00:18:00.361 { 00:18:00.361 "name": "BaseBdev4", 00:18:00.361 "uuid": "11bdbe40-57da-48e4-9729-706960ce9321", 00:18:00.361 "is_configured": true, 00:18:00.361 "data_offset": 0, 00:18:00.361 "data_size": 65536 00:18:00.361 } 00:18:00.361 ] 00:18:00.361 }' 00:18:00.361 21:14:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:00.361 21:14:22 -- common/autotest_common.sh@10 -- # set +x 00:18:01.304 21:14:23 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:01.304 21:14:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:01.304 21:14:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.304 21:14:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:01.304 21:14:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:01.304 21:14:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:01.304 21:14:23 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:01.563 [2024-06-07 21:14:24.130136] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:01.563 21:14:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:01.563 21:14:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:01.563 21:14:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.563 21:14:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:01.821 21:14:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:01.821 21:14:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:01.821 21:14:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:02.080 [2024-06-07 21:14:24.605080] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:02.080 21:14:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:02.080 21:14:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:02.080 21:14:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.080 21:14:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:02.339 21:14:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:02.339 21:14:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:02.339 21:14:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:02.598 [2024-06-07 21:14:25.067100] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:02.598 [2024-06-07 21:14:25.067135] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.598 [2024-06-07 21:14:25.067269] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.598 [2024-06-07 21:14:25.077173] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.598 [2024-06-07 21:14:25.077204] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:18:02.598 21:14:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:02.598 21:14:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:02.598 21:14:25 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.598 21:14:25 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:02.856 21:14:25 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:02.856 21:14:25 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:02.856 21:14:25 -- bdev/bdev_raid.sh@287 -- # killprocess 134259 00:18:02.856 21:14:25 -- common/autotest_common.sh@926 -- # '[' -z 134259 ']' 00:18:02.856 21:14:25 -- common/autotest_common.sh@930 -- # kill -0 134259 00:18:02.856 21:14:25 -- common/autotest_common.sh@931 -- # uname 00:18:02.856 21:14:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:02.856 21:14:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134259 00:18:02.856 killing process with pid 134259 00:18:02.856 21:14:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:02.856 21:14:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:02.856 21:14:25 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 134259' 00:18:02.856 21:14:25 -- common/autotest_common.sh@945 -- # kill 134259 00:18:02.856 21:14:25 -- common/autotest_common.sh@950 -- # wait 134259 00:18:02.856 [2024-06-07 21:14:25.354726] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:02.856 [2024-06-07 21:14:25.354845] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:03.115 ************************************ 00:18:03.115 END TEST raid_state_function_test 00:18:03.115 ************************************ 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:03.115 00:18:03.115 real 0m13.397s 00:18:03.115 user 0m25.153s 00:18:03.115 sys 0m1.460s 00:18:03.115 21:14:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:03.115 21:14:25 -- common/autotest_common.sh@10 -- # set +x 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:18:03.115 21:14:25 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:03.115 21:14:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:03.115 21:14:25 -- common/autotest_common.sh@10 -- # set +x 00:18:03.115 ************************************ 00:18:03.115 START TEST raid_state_function_test_sb 00:18:03.115 ************************************ 00:18:03.115 21:14:25 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:03.115 
21:14:25 -- bdev/bdev_raid.sh@226 -- # raid_pid=134725 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 134725' 00:18:03.115 Process raid pid: 134725 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@228 -- # waitforlisten 134725 /var/tmp/spdk-raid.sock 00:18:03.115 21:14:25 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:03.115 21:14:25 -- common/autotest_common.sh@819 -- # '[' -z 134725 ']' 00:18:03.115 21:14:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:03.115 21:14:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:03.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:03.115 21:14:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:03.115 21:14:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:03.115 21:14:25 -- common/autotest_common.sh@10 -- # set +x 00:18:03.115 [2024-06-07 21:14:25.709138] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:03.115 [2024-06-07 21:14:25.709980] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.374 [2024-06-07 21:14:25.879870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.374 [2024-06-07 21:14:25.947612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.374 [2024-06-07 21:14:26.005490] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:03.939 21:14:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:03.939 21:14:26 -- common/autotest_common.sh@852 -- # return 0 00:18:03.939 21:14:26 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:04.197 [2024-06-07 21:14:26.808116] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:04.197 [2024-06-07 21:14:26.808207] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:04.197 [2024-06-07 21:14:26.808236] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:04.197 [2024-06-07 21:14:26.808276] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:04.197 [2024-06-07 21:14:26.808284] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:04.197 [2024-06-07 21:14:26.808325] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:04.197 [2024-06-07 21:14:26.808334] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:04.197 [2024-06-07 21:14:26.808358] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:04.197 21:14:26 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:04.197 21:14:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:04.197 21:14:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:04.197 21:14:26 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid1 00:18:04.197 21:14:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:04.197 21:14:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:04.197 21:14:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:04.197 21:14:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:04.197 21:14:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:04.197 21:14:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:04.197 21:14:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.197 21:14:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.455 21:14:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:04.455 "name": "Existed_Raid", 00:18:04.455 "uuid": "a4f09059-46c2-4b7d-925b-c3596c72be0c", 00:18:04.455 "strip_size_kb": 0, 00:18:04.455 "state": "configuring", 00:18:04.455 "raid_level": "raid1", 00:18:04.455 "superblock": true, 00:18:04.455 "num_base_bdevs": 4, 00:18:04.455 "num_base_bdevs_discovered": 0, 00:18:04.455 "num_base_bdevs_operational": 4, 00:18:04.455 "base_bdevs_list": [ 00:18:04.455 { 00:18:04.455 "name": "BaseBdev1", 00:18:04.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.455 "is_configured": false, 00:18:04.455 "data_offset": 0, 00:18:04.455 "data_size": 0 00:18:04.455 }, 00:18:04.455 { 00:18:04.455 "name": "BaseBdev2", 00:18:04.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.455 "is_configured": false, 00:18:04.455 "data_offset": 0, 00:18:04.455 "data_size": 0 00:18:04.455 }, 00:18:04.455 { 00:18:04.455 "name": "BaseBdev3", 00:18:04.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.455 "is_configured": false, 00:18:04.455 "data_offset": 0, 00:18:04.455 "data_size": 0 00:18:04.455 }, 00:18:04.455 { 00:18:04.455 "name": "BaseBdev4", 00:18:04.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.455 "is_configured": false, 00:18:04.455 "data_offset": 0, 00:18:04.455 "data_size": 0 00:18:04.455 } 00:18:04.455 ] 00:18:04.455 }' 00:18:04.455 21:14:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:04.455 21:14:27 -- common/autotest_common.sh@10 -- # set +x 00:18:05.021 21:14:27 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:05.279 [2024-06-07 21:14:27.933301] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:05.279 [2024-06-07 21:14:27.933369] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:05.279 21:14:27 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:05.540 [2024-06-07 21:14:28.145320] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:05.540 [2024-06-07 21:14:28.145398] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:05.540 [2024-06-07 21:14:28.145427] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:05.541 [2024-06-07 21:14:28.145459] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:05.541 [2024-06-07 21:14:28.145467] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:05.541 [2024-06-07 
21:14:28.145502] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:05.541 [2024-06-07 21:14:28.145510] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:05.541 [2024-06-07 21:14:28.145532] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:05.541 21:14:28 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:05.800 [2024-06-07 21:14:28.416281] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:05.800 BaseBdev1 00:18:05.800 21:14:28 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:05.800 21:14:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:05.800 21:14:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:05.800 21:14:28 -- common/autotest_common.sh@889 -- # local i 00:18:05.800 21:14:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:05.800 21:14:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:05.800 21:14:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:06.059 21:14:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:06.317 [ 00:18:06.317 { 00:18:06.317 "name": "BaseBdev1", 00:18:06.317 "aliases": [ 00:18:06.317 "161e9bcc-b69e-4a11-b379-5c4599407471" 00:18:06.317 ], 00:18:06.317 "product_name": "Malloc disk", 00:18:06.317 "block_size": 512, 00:18:06.317 "num_blocks": 65536, 00:18:06.317 "uuid": "161e9bcc-b69e-4a11-b379-5c4599407471", 00:18:06.317 "assigned_rate_limits": { 00:18:06.317 "rw_ios_per_sec": 0, 00:18:06.317 "rw_mbytes_per_sec": 0, 00:18:06.317 "r_mbytes_per_sec": 0, 00:18:06.317 "w_mbytes_per_sec": 0 00:18:06.317 }, 00:18:06.317 "claimed": true, 00:18:06.317 "claim_type": "exclusive_write", 00:18:06.317 "zoned": false, 00:18:06.317 "supported_io_types": { 00:18:06.317 "read": true, 00:18:06.317 "write": true, 00:18:06.317 "unmap": true, 00:18:06.317 "write_zeroes": true, 00:18:06.317 "flush": true, 00:18:06.317 "reset": true, 00:18:06.317 "compare": false, 00:18:06.317 "compare_and_write": false, 00:18:06.317 "abort": true, 00:18:06.317 "nvme_admin": false, 00:18:06.317 "nvme_io": false 00:18:06.317 }, 00:18:06.317 "memory_domains": [ 00:18:06.317 { 00:18:06.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.317 "dma_device_type": 2 00:18:06.317 } 00:18:06.317 ], 00:18:06.317 "driver_specific": {} 00:18:06.317 } 00:18:06.317 ] 00:18:06.317 21:14:28 -- common/autotest_common.sh@895 -- # return 0 00:18:06.317 21:14:28 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:06.317 21:14:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:06.317 21:14:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:06.317 21:14:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:06.317 21:14:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:06.317 21:14:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:06.317 21:14:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:06.317 21:14:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:06.317 21:14:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:06.317 21:14:28 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:18:06.317 21:14:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.317 21:14:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.585 21:14:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:06.585 "name": "Existed_Raid", 00:18:06.585 "uuid": "2cd961e5-eb86-4ce0-8d02-0162c813a0be", 00:18:06.585 "strip_size_kb": 0, 00:18:06.585 "state": "configuring", 00:18:06.585 "raid_level": "raid1", 00:18:06.585 "superblock": true, 00:18:06.585 "num_base_bdevs": 4, 00:18:06.585 "num_base_bdevs_discovered": 1, 00:18:06.585 "num_base_bdevs_operational": 4, 00:18:06.585 "base_bdevs_list": [ 00:18:06.585 { 00:18:06.585 "name": "BaseBdev1", 00:18:06.585 "uuid": "161e9bcc-b69e-4a11-b379-5c4599407471", 00:18:06.585 "is_configured": true, 00:18:06.585 "data_offset": 2048, 00:18:06.585 "data_size": 63488 00:18:06.585 }, 00:18:06.585 { 00:18:06.585 "name": "BaseBdev2", 00:18:06.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.585 "is_configured": false, 00:18:06.585 "data_offset": 0, 00:18:06.585 "data_size": 0 00:18:06.585 }, 00:18:06.585 { 00:18:06.585 "name": "BaseBdev3", 00:18:06.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.585 "is_configured": false, 00:18:06.585 "data_offset": 0, 00:18:06.585 "data_size": 0 00:18:06.585 }, 00:18:06.585 { 00:18:06.585 "name": "BaseBdev4", 00:18:06.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.585 "is_configured": false, 00:18:06.585 "data_offset": 0, 00:18:06.585 "data_size": 0 00:18:06.585 } 00:18:06.585 ] 00:18:06.585 }' 00:18:06.585 21:14:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:06.585 21:14:29 -- common/autotest_common.sh@10 -- # set +x 00:18:07.173 21:14:29 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:07.431 [2024-06-07 21:14:29.980696] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:07.431 [2024-06-07 21:14:29.980797] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:07.431 21:14:29 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:07.431 21:14:29 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:07.690 21:14:30 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:07.948 BaseBdev1 00:18:07.948 21:14:30 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:07.948 21:14:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:07.948 21:14:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:07.948 21:14:30 -- common/autotest_common.sh@889 -- # local i 00:18:07.948 21:14:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:07.948 21:14:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:07.948 21:14:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:08.206 21:14:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:08.465 [ 00:18:08.465 { 00:18:08.465 "name": "BaseBdev1", 00:18:08.465 "aliases": [ 00:18:08.465 
"0a6cc779-761b-4694-a67b-36352a5ed6c4" 00:18:08.465 ], 00:18:08.465 "product_name": "Malloc disk", 00:18:08.465 "block_size": 512, 00:18:08.465 "num_blocks": 65536, 00:18:08.465 "uuid": "0a6cc779-761b-4694-a67b-36352a5ed6c4", 00:18:08.465 "assigned_rate_limits": { 00:18:08.465 "rw_ios_per_sec": 0, 00:18:08.465 "rw_mbytes_per_sec": 0, 00:18:08.465 "r_mbytes_per_sec": 0, 00:18:08.465 "w_mbytes_per_sec": 0 00:18:08.465 }, 00:18:08.465 "claimed": false, 00:18:08.465 "zoned": false, 00:18:08.465 "supported_io_types": { 00:18:08.465 "read": true, 00:18:08.465 "write": true, 00:18:08.465 "unmap": true, 00:18:08.465 "write_zeroes": true, 00:18:08.465 "flush": true, 00:18:08.465 "reset": true, 00:18:08.465 "compare": false, 00:18:08.465 "compare_and_write": false, 00:18:08.465 "abort": true, 00:18:08.465 "nvme_admin": false, 00:18:08.465 "nvme_io": false 00:18:08.465 }, 00:18:08.465 "memory_domains": [ 00:18:08.465 { 00:18:08.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.465 "dma_device_type": 2 00:18:08.465 } 00:18:08.465 ], 00:18:08.465 "driver_specific": {} 00:18:08.465 } 00:18:08.465 ] 00:18:08.465 21:14:30 -- common/autotest_common.sh@895 -- # return 0 00:18:08.465 21:14:30 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:08.465 [2024-06-07 21:14:31.101195] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.465 [2024-06-07 21:14:31.102924] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:08.465 [2024-06-07 21:14:31.102993] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:08.465 [2024-06-07 21:14:31.103021] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:08.465 [2024-06-07 21:14:31.103043] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:08.465 [2024-06-07 21:14:31.103051] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:08.465 [2024-06-07 21:14:31.103066] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:08.465 21:14:31 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:08.465 21:14:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:08.465 21:14:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:08.465 21:14:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:08.465 21:14:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:08.465 21:14:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:08.465 21:14:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:08.465 21:14:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:08.465 21:14:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:08.465 21:14:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:08.465 21:14:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:08.465 21:14:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:08.465 21:14:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.465 21:14:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.723 21:14:31 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:18:08.723 "name": "Existed_Raid", 00:18:08.723 "uuid": "496b72e2-c8eb-4844-8c73-6e522cbad3e3", 00:18:08.723 "strip_size_kb": 0, 00:18:08.723 "state": "configuring", 00:18:08.723 "raid_level": "raid1", 00:18:08.723 "superblock": true, 00:18:08.723 "num_base_bdevs": 4, 00:18:08.723 "num_base_bdevs_discovered": 1, 00:18:08.723 "num_base_bdevs_operational": 4, 00:18:08.723 "base_bdevs_list": [ 00:18:08.723 { 00:18:08.723 "name": "BaseBdev1", 00:18:08.723 "uuid": "0a6cc779-761b-4694-a67b-36352a5ed6c4", 00:18:08.723 "is_configured": true, 00:18:08.723 "data_offset": 2048, 00:18:08.723 "data_size": 63488 00:18:08.723 }, 00:18:08.723 { 00:18:08.723 "name": "BaseBdev2", 00:18:08.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.723 "is_configured": false, 00:18:08.723 "data_offset": 0, 00:18:08.723 "data_size": 0 00:18:08.723 }, 00:18:08.723 { 00:18:08.723 "name": "BaseBdev3", 00:18:08.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.723 "is_configured": false, 00:18:08.723 "data_offset": 0, 00:18:08.723 "data_size": 0 00:18:08.723 }, 00:18:08.723 { 00:18:08.723 "name": "BaseBdev4", 00:18:08.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.723 "is_configured": false, 00:18:08.723 "data_offset": 0, 00:18:08.723 "data_size": 0 00:18:08.723 } 00:18:08.723 ] 00:18:08.723 }' 00:18:08.723 21:14:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:08.723 21:14:31 -- common/autotest_common.sh@10 -- # set +x 00:18:09.658 21:14:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:09.658 [2024-06-07 21:14:32.232620] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:09.658 BaseBdev2 00:18:09.658 21:14:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:09.658 21:14:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:09.658 21:14:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:09.658 21:14:32 -- common/autotest_common.sh@889 -- # local i 00:18:09.658 21:14:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:09.658 21:14:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:09.658 21:14:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:09.916 21:14:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:10.175 [ 00:18:10.175 { 00:18:10.175 "name": "BaseBdev2", 00:18:10.175 "aliases": [ 00:18:10.175 "a83d77bd-08e8-4c0b-80eb-36b0afe48d6e" 00:18:10.175 ], 00:18:10.175 "product_name": "Malloc disk", 00:18:10.175 "block_size": 512, 00:18:10.175 "num_blocks": 65536, 00:18:10.175 "uuid": "a83d77bd-08e8-4c0b-80eb-36b0afe48d6e", 00:18:10.175 "assigned_rate_limits": { 00:18:10.175 "rw_ios_per_sec": 0, 00:18:10.175 "rw_mbytes_per_sec": 0, 00:18:10.175 "r_mbytes_per_sec": 0, 00:18:10.175 "w_mbytes_per_sec": 0 00:18:10.175 }, 00:18:10.175 "claimed": true, 00:18:10.175 "claim_type": "exclusive_write", 00:18:10.175 "zoned": false, 00:18:10.175 "supported_io_types": { 00:18:10.175 "read": true, 00:18:10.175 "write": true, 00:18:10.175 "unmap": true, 00:18:10.175 "write_zeroes": true, 00:18:10.175 "flush": true, 00:18:10.175 "reset": true, 00:18:10.175 "compare": false, 00:18:10.175 "compare_and_write": false, 00:18:10.175 "abort": true, 00:18:10.175 "nvme_admin": false, 00:18:10.175 
"nvme_io": false 00:18:10.175 }, 00:18:10.175 "memory_domains": [ 00:18:10.175 { 00:18:10.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.175 "dma_device_type": 2 00:18:10.175 } 00:18:10.175 ], 00:18:10.175 "driver_specific": {} 00:18:10.175 } 00:18:10.175 ] 00:18:10.175 21:14:32 -- common/autotest_common.sh@895 -- # return 0 00:18:10.175 21:14:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:10.175 21:14:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:10.175 21:14:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:10.175 21:14:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:10.175 21:14:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:10.175 21:14:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:10.175 21:14:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:10.175 21:14:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:10.175 21:14:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:10.175 21:14:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:10.175 21:14:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:10.175 21:14:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:10.175 21:14:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.175 21:14:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.434 21:14:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:10.434 "name": "Existed_Raid", 00:18:10.434 "uuid": "496b72e2-c8eb-4844-8c73-6e522cbad3e3", 00:18:10.434 "strip_size_kb": 0, 00:18:10.434 "state": "configuring", 00:18:10.434 "raid_level": "raid1", 00:18:10.434 "superblock": true, 00:18:10.434 "num_base_bdevs": 4, 00:18:10.434 "num_base_bdevs_discovered": 2, 00:18:10.434 "num_base_bdevs_operational": 4, 00:18:10.434 "base_bdevs_list": [ 00:18:10.434 { 00:18:10.434 "name": "BaseBdev1", 00:18:10.434 "uuid": "0a6cc779-761b-4694-a67b-36352a5ed6c4", 00:18:10.434 "is_configured": true, 00:18:10.434 "data_offset": 2048, 00:18:10.434 "data_size": 63488 00:18:10.434 }, 00:18:10.434 { 00:18:10.434 "name": "BaseBdev2", 00:18:10.434 "uuid": "a83d77bd-08e8-4c0b-80eb-36b0afe48d6e", 00:18:10.434 "is_configured": true, 00:18:10.434 "data_offset": 2048, 00:18:10.434 "data_size": 63488 00:18:10.434 }, 00:18:10.434 { 00:18:10.434 "name": "BaseBdev3", 00:18:10.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.434 "is_configured": false, 00:18:10.434 "data_offset": 0, 00:18:10.434 "data_size": 0 00:18:10.434 }, 00:18:10.434 { 00:18:10.434 "name": "BaseBdev4", 00:18:10.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.434 "is_configured": false, 00:18:10.434 "data_offset": 0, 00:18:10.434 "data_size": 0 00:18:10.434 } 00:18:10.434 ] 00:18:10.434 }' 00:18:10.434 21:14:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:10.434 21:14:32 -- common/autotest_common.sh@10 -- # set +x 00:18:10.999 21:14:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:11.257 [2024-06-07 21:14:33.861814] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:11.257 BaseBdev3 00:18:11.257 21:14:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:11.257 21:14:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:11.257 21:14:33 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:11.257 21:14:33 -- common/autotest_common.sh@889 -- # local i 00:18:11.257 21:14:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:11.257 21:14:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:11.257 21:14:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:11.515 21:14:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:11.773 [ 00:18:11.774 { 00:18:11.774 "name": "BaseBdev3", 00:18:11.774 "aliases": [ 00:18:11.774 "a7555ffb-d8ed-4f12-a7c9-be96f511f54f" 00:18:11.774 ], 00:18:11.774 "product_name": "Malloc disk", 00:18:11.774 "block_size": 512, 00:18:11.774 "num_blocks": 65536, 00:18:11.774 "uuid": "a7555ffb-d8ed-4f12-a7c9-be96f511f54f", 00:18:11.774 "assigned_rate_limits": { 00:18:11.774 "rw_ios_per_sec": 0, 00:18:11.774 "rw_mbytes_per_sec": 0, 00:18:11.774 "r_mbytes_per_sec": 0, 00:18:11.774 "w_mbytes_per_sec": 0 00:18:11.774 }, 00:18:11.774 "claimed": true, 00:18:11.774 "claim_type": "exclusive_write", 00:18:11.774 "zoned": false, 00:18:11.774 "supported_io_types": { 00:18:11.774 "read": true, 00:18:11.774 "write": true, 00:18:11.774 "unmap": true, 00:18:11.774 "write_zeroes": true, 00:18:11.774 "flush": true, 00:18:11.774 "reset": true, 00:18:11.774 "compare": false, 00:18:11.774 "compare_and_write": false, 00:18:11.774 "abort": true, 00:18:11.774 "nvme_admin": false, 00:18:11.774 "nvme_io": false 00:18:11.774 }, 00:18:11.774 "memory_domains": [ 00:18:11.774 { 00:18:11.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.774 "dma_device_type": 2 00:18:11.774 } 00:18:11.774 ], 00:18:11.774 "driver_specific": {} 00:18:11.774 } 00:18:11.774 ] 00:18:11.774 21:14:34 -- common/autotest_common.sh@895 -- # return 0 00:18:11.774 21:14:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:11.774 21:14:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:11.774 21:14:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:11.774 21:14:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:11.774 21:14:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:11.774 21:14:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:11.774 21:14:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:11.774 21:14:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:11.774 21:14:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:11.774 21:14:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:11.774 21:14:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:11.774 21:14:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:11.774 21:14:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.774 21:14:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.032 21:14:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:12.032 "name": "Existed_Raid", 00:18:12.032 "uuid": "496b72e2-c8eb-4844-8c73-6e522cbad3e3", 00:18:12.032 "strip_size_kb": 0, 00:18:12.032 "state": "configuring", 00:18:12.032 "raid_level": "raid1", 00:18:12.032 "superblock": true, 00:18:12.032 "num_base_bdevs": 4, 00:18:12.032 "num_base_bdevs_discovered": 3, 00:18:12.032 "num_base_bdevs_operational": 4, 00:18:12.032 
"base_bdevs_list": [ 00:18:12.032 { 00:18:12.032 "name": "BaseBdev1", 00:18:12.032 "uuid": "0a6cc779-761b-4694-a67b-36352a5ed6c4", 00:18:12.032 "is_configured": true, 00:18:12.032 "data_offset": 2048, 00:18:12.032 "data_size": 63488 00:18:12.032 }, 00:18:12.032 { 00:18:12.032 "name": "BaseBdev2", 00:18:12.032 "uuid": "a83d77bd-08e8-4c0b-80eb-36b0afe48d6e", 00:18:12.032 "is_configured": true, 00:18:12.032 "data_offset": 2048, 00:18:12.032 "data_size": 63488 00:18:12.032 }, 00:18:12.032 { 00:18:12.032 "name": "BaseBdev3", 00:18:12.032 "uuid": "a7555ffb-d8ed-4f12-a7c9-be96f511f54f", 00:18:12.032 "is_configured": true, 00:18:12.032 "data_offset": 2048, 00:18:12.032 "data_size": 63488 00:18:12.032 }, 00:18:12.032 { 00:18:12.032 "name": "BaseBdev4", 00:18:12.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.032 "is_configured": false, 00:18:12.032 "data_offset": 0, 00:18:12.032 "data_size": 0 00:18:12.032 } 00:18:12.032 ] 00:18:12.032 }' 00:18:12.032 21:14:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:12.032 21:14:34 -- common/autotest_common.sh@10 -- # set +x 00:18:12.598 21:14:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:12.856 [2024-06-07 21:14:35.446861] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:12.856 [2024-06-07 21:14:35.447087] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:18:12.856 [2024-06-07 21:14:35.447101] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:12.856 BaseBdev4 00:18:12.856 [2024-06-07 21:14:35.447333] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:18:12.856 [2024-06-07 21:14:35.447752] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:18:12.856 [2024-06-07 21:14:35.447776] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:18:12.856 [2024-06-07 21:14:35.447958] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.856 21:14:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:12.856 21:14:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:12.856 21:14:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:12.856 21:14:35 -- common/autotest_common.sh@889 -- # local i 00:18:12.856 21:14:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:12.856 21:14:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:12.856 21:14:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:13.116 21:14:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:13.375 [ 00:18:13.375 { 00:18:13.375 "name": "BaseBdev4", 00:18:13.375 "aliases": [ 00:18:13.375 "b038206e-eb02-47bb-8549-bec5240a3375" 00:18:13.375 ], 00:18:13.375 "product_name": "Malloc disk", 00:18:13.375 "block_size": 512, 00:18:13.375 "num_blocks": 65536, 00:18:13.375 "uuid": "b038206e-eb02-47bb-8549-bec5240a3375", 00:18:13.375 "assigned_rate_limits": { 00:18:13.375 "rw_ios_per_sec": 0, 00:18:13.375 "rw_mbytes_per_sec": 0, 00:18:13.375 "r_mbytes_per_sec": 0, 00:18:13.375 "w_mbytes_per_sec": 0 00:18:13.375 }, 00:18:13.375 "claimed": true, 00:18:13.375 "claim_type": 
"exclusive_write", 00:18:13.375 "zoned": false, 00:18:13.375 "supported_io_types": { 00:18:13.375 "read": true, 00:18:13.375 "write": true, 00:18:13.375 "unmap": true, 00:18:13.375 "write_zeroes": true, 00:18:13.375 "flush": true, 00:18:13.375 "reset": true, 00:18:13.376 "compare": false, 00:18:13.376 "compare_and_write": false, 00:18:13.376 "abort": true, 00:18:13.376 "nvme_admin": false, 00:18:13.376 "nvme_io": false 00:18:13.376 }, 00:18:13.376 "memory_domains": [ 00:18:13.376 { 00:18:13.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.376 "dma_device_type": 2 00:18:13.376 } 00:18:13.376 ], 00:18:13.376 "driver_specific": {} 00:18:13.376 } 00:18:13.376 ] 00:18:13.376 21:14:35 -- common/autotest_common.sh@895 -- # return 0 00:18:13.376 21:14:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:13.376 21:14:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:13.376 21:14:35 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:13.376 21:14:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:13.376 21:14:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:13.376 21:14:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:13.376 21:14:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:13.376 21:14:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:13.376 21:14:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:13.376 21:14:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:13.376 21:14:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:13.376 21:14:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:13.376 21:14:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.376 21:14:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.634 21:14:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:13.634 "name": "Existed_Raid", 00:18:13.634 "uuid": "496b72e2-c8eb-4844-8c73-6e522cbad3e3", 00:18:13.634 "strip_size_kb": 0, 00:18:13.634 "state": "online", 00:18:13.634 "raid_level": "raid1", 00:18:13.634 "superblock": true, 00:18:13.634 "num_base_bdevs": 4, 00:18:13.634 "num_base_bdevs_discovered": 4, 00:18:13.634 "num_base_bdevs_operational": 4, 00:18:13.634 "base_bdevs_list": [ 00:18:13.634 { 00:18:13.634 "name": "BaseBdev1", 00:18:13.634 "uuid": "0a6cc779-761b-4694-a67b-36352a5ed6c4", 00:18:13.634 "is_configured": true, 00:18:13.634 "data_offset": 2048, 00:18:13.634 "data_size": 63488 00:18:13.634 }, 00:18:13.634 { 00:18:13.634 "name": "BaseBdev2", 00:18:13.634 "uuid": "a83d77bd-08e8-4c0b-80eb-36b0afe48d6e", 00:18:13.634 "is_configured": true, 00:18:13.634 "data_offset": 2048, 00:18:13.634 "data_size": 63488 00:18:13.634 }, 00:18:13.634 { 00:18:13.635 "name": "BaseBdev3", 00:18:13.635 "uuid": "a7555ffb-d8ed-4f12-a7c9-be96f511f54f", 00:18:13.635 "is_configured": true, 00:18:13.635 "data_offset": 2048, 00:18:13.635 "data_size": 63488 00:18:13.635 }, 00:18:13.635 { 00:18:13.635 "name": "BaseBdev4", 00:18:13.635 "uuid": "b038206e-eb02-47bb-8549-bec5240a3375", 00:18:13.635 "is_configured": true, 00:18:13.635 "data_offset": 2048, 00:18:13.635 "data_size": 63488 00:18:13.635 } 00:18:13.635 ] 00:18:13.635 }' 00:18:13.635 21:14:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:13.635 21:14:36 -- common/autotest_common.sh@10 -- # set +x 00:18:14.202 21:14:36 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:14.460 [2024-06-07 21:14:37.087440] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:14.460 21:14:37 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:14.460 21:14:37 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:14.460 21:14:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:14.460 21:14:37 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:14.460 21:14:37 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:14.460 21:14:37 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:14.460 21:14:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:14.460 21:14:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:14.460 21:14:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:14.460 21:14:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:14.460 21:14:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:14.460 21:14:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:14.460 21:14:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:14.460 21:14:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:14.460 21:14:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:14.460 21:14:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.460 21:14:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.718 21:14:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:14.718 "name": "Existed_Raid", 00:18:14.718 "uuid": "496b72e2-c8eb-4844-8c73-6e522cbad3e3", 00:18:14.718 "strip_size_kb": 0, 00:18:14.718 "state": "online", 00:18:14.718 "raid_level": "raid1", 00:18:14.718 "superblock": true, 00:18:14.718 "num_base_bdevs": 4, 00:18:14.718 "num_base_bdevs_discovered": 3, 00:18:14.718 "num_base_bdevs_operational": 3, 00:18:14.718 "base_bdevs_list": [ 00:18:14.718 { 00:18:14.718 "name": null, 00:18:14.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.718 "is_configured": false, 00:18:14.718 "data_offset": 2048, 00:18:14.718 "data_size": 63488 00:18:14.718 }, 00:18:14.718 { 00:18:14.718 "name": "BaseBdev2", 00:18:14.718 "uuid": "a83d77bd-08e8-4c0b-80eb-36b0afe48d6e", 00:18:14.718 "is_configured": true, 00:18:14.718 "data_offset": 2048, 00:18:14.718 "data_size": 63488 00:18:14.718 }, 00:18:14.718 { 00:18:14.718 "name": "BaseBdev3", 00:18:14.718 "uuid": "a7555ffb-d8ed-4f12-a7c9-be96f511f54f", 00:18:14.719 "is_configured": true, 00:18:14.719 "data_offset": 2048, 00:18:14.719 "data_size": 63488 00:18:14.719 }, 00:18:14.719 { 00:18:14.719 "name": "BaseBdev4", 00:18:14.719 "uuid": "b038206e-eb02-47bb-8549-bec5240a3375", 00:18:14.719 "is_configured": true, 00:18:14.719 "data_offset": 2048, 00:18:14.719 "data_size": 63488 00:18:14.719 } 00:18:14.719 ] 00:18:14.719 }' 00:18:14.719 21:14:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:14.719 21:14:37 -- common/autotest_common.sh@10 -- # set +x 00:18:15.653 21:14:38 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:15.653 21:14:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:15.653 21:14:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.653 21:14:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:15.653 21:14:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:15.653 21:14:38 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:15.653 21:14:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:15.912 [2024-06-07 21:14:38.509858] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:15.912 21:14:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:15.912 21:14:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:15.912 21:14:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.912 21:14:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:16.180 21:14:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:16.180 21:14:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:16.180 21:14:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:16.453 [2024-06-07 21:14:38.992742] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:16.453 21:14:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:16.453 21:14:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:16.453 21:14:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.453 21:14:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:16.712 21:14:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:16.712 21:14:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:16.712 21:14:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:16.970 [2024-06-07 21:14:39.451268] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:16.970 [2024-06-07 21:14:39.451304] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:16.970 [2024-06-07 21:14:39.451397] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.970 [2024-06-07 21:14:39.462252] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:16.970 [2024-06-07 21:14:39.462284] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:18:16.970 21:14:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:16.970 21:14:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:16.970 21:14:39 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.970 21:14:39 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:17.228 21:14:39 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:17.228 21:14:39 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:17.228 21:14:39 -- bdev/bdev_raid.sh@287 -- # killprocess 134725 00:18:17.228 21:14:39 -- common/autotest_common.sh@926 -- # '[' -z 134725 ']' 00:18:17.228 21:14:39 -- common/autotest_common.sh@930 -- # kill -0 134725 00:18:17.228 21:14:39 -- common/autotest_common.sh@931 -- # uname 00:18:17.228 21:14:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:17.228 21:14:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134725 00:18:17.228 killing process with pid 134725 00:18:17.228 21:14:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 
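For orientation: the loop traced above (the bdev_raid.sh@273-281 markers) removes the remaining base bdevs one at a time, and because raid1 carries redundancy the array survives every removal until the last member is gone. Condensed to a sketch, with the rpc.py path, socket, and bdev names exactly as they appear in the trace (the loop body is paraphrased, not the verbatim script):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for bdev in BaseBdev2 BaseBdev3 BaseBdev4; do
    # the raid bdev must still be listed before each removal
    name=$($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"]')
    [ "$name" = Existed_Raid ] || exit 1
    $rpc bdev_malloc_delete "$bdev"
done
# once the last base bdev is gone the raid bdev is destructed too, which is
# why the @281 query above ('.[0]["name"] | select(.)') comes back empty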
00:18:17.228 21:14:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:17.228 21:14:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134725' 00:18:17.228 21:14:39 -- common/autotest_common.sh@945 -- # kill 134725 00:18:17.228 21:14:39 -- common/autotest_common.sh@950 -- # wait 134725 00:18:17.228 [2024-06-07 21:14:39.711319] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:17.228 [2024-06-07 21:14:39.711410] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:17.486 ************************************ 00:18:17.486 END TEST raid_state_function_test_sb 00:18:17.486 ************************************ 00:18:17.486 21:14:39 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:17.486 00:18:17.486 real 0m14.288s 00:18:17.486 user 0m26.886s 00:18:17.486 sys 0m1.545s 00:18:17.486 21:14:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:17.486 21:14:39 -- common/autotest_common.sh@10 -- # set +x 00:18:17.486 21:14:39 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:18:17.486 21:14:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:17.486 21:14:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:17.486 21:14:39 -- common/autotest_common.sh@10 -- # set +x 00:18:17.486 ************************************ 00:18:17.486 START TEST raid_superblock_test 00:18:17.486 ************************************ 00:18:17.486 21:14:39 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:18:17.486 21:14:39 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:18:17.486 21:14:39 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:17.486 21:14:39 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:17.486 21:14:39 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:17.486 21:14:39 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:17.486 21:14:39 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:17.486 21:14:39 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:17.486 21:14:39 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:17.486 21:14:39 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:17.486 21:14:39 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:17.486 21:14:39 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:17.486 21:14:39 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:17.486 21:14:39 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:17.486 21:14:39 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:18:17.486 21:14:39 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:18:17.486 21:14:39 -- bdev/bdev_raid.sh@357 -- # raid_pid=135187 00:18:17.486 21:14:39 -- bdev/bdev_raid.sh@358 -- # waitforlisten 135187 /var/tmp/spdk-raid.sock 00:18:17.486 21:14:39 -- common/autotest_common.sh@819 -- # '[' -z 135187 ']' 00:18:17.486 21:14:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:17.486 21:14:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:17.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:17.486 21:14:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
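A fresh SPDK app is started at this point for raid_superblock_test, and the harness blocks until its RPC socket answers. The moving parts, condensed (the bdev_svc command line and socket are as traced at the @356-358 markers; the pid capture and waitforlisten internals are summarized here, not quoted):

/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!                                  # 135187 in this run
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
# waitforlisten re-probes the socket via rpc.py (max_retries=100 in the trace)
# so that the RPCs issued next cannot race the app's startup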
00:18:17.486 21:14:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:17.486 21:14:39 -- common/autotest_common.sh@10 -- # set +x 00:18:17.487 21:14:39 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:17.487 [2024-06-07 21:14:40.056614] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:17.487 [2024-06-07 21:14:40.057074] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135187 ] 00:18:17.745 [2024-06-07 21:14:40.223146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.745 [2024-06-07 21:14:40.298701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.745 [2024-06-07 21:14:40.354937] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.312 21:14:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:18.312 21:14:40 -- common/autotest_common.sh@852 -- # return 0 00:18:18.312 21:14:40 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:18.312 21:14:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:18.312 21:14:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:18.312 21:14:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:18.312 21:14:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:18.312 21:14:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:18.312 21:14:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:18.312 21:14:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:18.312 21:14:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:18.571 malloc1 00:18:18.571 21:14:41 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:18.830 [2024-06-07 21:14:41.427148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:18.830 [2024-06-07 21:14:41.427278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.830 [2024-06-07 21:14:41.427321] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:18:18.830 [2024-06-07 21:14:41.427370] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.830 [2024-06-07 21:14:41.429668] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.830 [2024-06-07 21:14:41.429729] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:18.830 pt1 00:18:18.830 21:14:41 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:18.830 21:14:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:18.830 21:14:41 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:18.830 21:14:41 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:18.830 21:14:41 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:18.830 21:14:41 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:18.830 21:14:41 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:18.830 21:14:41 -- bdev/bdev_raid.sh@368 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:18.830 21:14:41 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:19.088 malloc2 00:18:19.088 21:14:41 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:19.347 [2024-06-07 21:14:41.845492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:19.347 [2024-06-07 21:14:41.845609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.347 [2024-06-07 21:14:41.845656] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:18:19.347 [2024-06-07 21:14:41.845711] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.347 [2024-06-07 21:14:41.848143] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.347 [2024-06-07 21:14:41.848207] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:19.347 pt2 00:18:19.347 21:14:41 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:19.347 21:14:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:19.347 21:14:41 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:19.347 21:14:41 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:19.347 21:14:41 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:19.347 21:14:41 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:19.347 21:14:41 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:19.347 21:14:41 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:19.347 21:14:41 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:19.605 malloc3 00:18:19.605 21:14:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:19.863 [2024-06-07 21:14:42.351793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:19.863 [2024-06-07 21:14:42.351914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.863 [2024-06-07 21:14:42.351956] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:19.863 [2024-06-07 21:14:42.352000] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.864 [2024-06-07 21:14:42.354231] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.864 [2024-06-07 21:14:42.354313] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:19.864 pt3 00:18:19.864 21:14:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:19.864 21:14:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:19.864 21:14:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:19.864 21:14:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:19.864 21:14:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:19.864 21:14:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:19.864 21:14:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:19.864 21:14:42 -- bdev/bdev_raid.sh@368 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:19.864 21:14:42 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:20.122 malloc4 00:18:20.122 21:14:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:20.381 [2024-06-07 21:14:42.814297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:20.381 [2024-06-07 21:14:42.814411] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.381 [2024-06-07 21:14:42.814453] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:20.381 [2024-06-07 21:14:42.814494] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.381 [2024-06-07 21:14:42.816733] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.381 [2024-06-07 21:14:42.816799] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:20.381 pt4 00:18:20.381 21:14:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:20.381 21:14:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:20.381 21:14:42 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:20.381 [2024-06-07 21:14:43.022453] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:20.381 [2024-06-07 21:14:43.024449] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:20.381 [2024-06-07 21:14:43.024541] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:20.381 [2024-06-07 21:14:43.024604] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:20.381 [2024-06-07 21:14:43.024917] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:18:20.381 [2024-06-07 21:14:43.024942] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:20.381 [2024-06-07 21:14:43.025107] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:18:20.381 [2024-06-07 21:14:43.025554] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:18:20.381 [2024-06-07 21:14:43.025578] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:18:20.381 [2024-06-07 21:14:43.025791] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.381 21:14:43 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:20.381 21:14:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:20.381 21:14:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:20.381 21:14:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:20.381 21:14:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:20.381 21:14:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:20.381 21:14:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:20.381 21:14:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:20.381 21:14:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:20.381 21:14:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:20.381 21:14:43 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.381 21:14:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.639 21:14:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:20.639 "name": "raid_bdev1", 00:18:20.639 "uuid": "79b1d350-2ebc-4fdc-a98e-1f701514c2c2", 00:18:20.639 "strip_size_kb": 0, 00:18:20.639 "state": "online", 00:18:20.639 "raid_level": "raid1", 00:18:20.639 "superblock": true, 00:18:20.639 "num_base_bdevs": 4, 00:18:20.639 "num_base_bdevs_discovered": 4, 00:18:20.639 "num_base_bdevs_operational": 4, 00:18:20.639 "base_bdevs_list": [ 00:18:20.639 { 00:18:20.639 "name": "pt1", 00:18:20.639 "uuid": "8be95550-7fe0-565a-8360-bc3318ac2801", 00:18:20.639 "is_configured": true, 00:18:20.639 "data_offset": 2048, 00:18:20.639 "data_size": 63488 00:18:20.639 }, 00:18:20.639 { 00:18:20.639 "name": "pt2", 00:18:20.639 "uuid": "ecf636dc-9f01-5a46-be30-d5c42f2b6b01", 00:18:20.639 "is_configured": true, 00:18:20.639 "data_offset": 2048, 00:18:20.639 "data_size": 63488 00:18:20.639 }, 00:18:20.639 { 00:18:20.639 "name": "pt3", 00:18:20.639 "uuid": "03ed0b71-faf4-54a1-8533-521e84943a20", 00:18:20.639 "is_configured": true, 00:18:20.639 "data_offset": 2048, 00:18:20.639 "data_size": 63488 00:18:20.639 }, 00:18:20.639 { 00:18:20.639 "name": "pt4", 00:18:20.639 "uuid": "29febac5-cef6-5fcd-8f23-13f420e62df6", 00:18:20.639 "is_configured": true, 00:18:20.639 "data_offset": 2048, 00:18:20.639 "data_size": 63488 00:18:20.639 } 00:18:20.639 ] 00:18:20.639 }' 00:18:20.639 21:14:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:20.639 21:14:43 -- common/autotest_common.sh@10 -- # set +x 00:18:21.574 21:14:43 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:21.574 21:14:43 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:21.574 [2024-06-07 21:14:44.142842] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.574 21:14:44 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=79b1d350-2ebc-4fdc-a98e-1f701514c2c2 00:18:21.574 21:14:44 -- bdev/bdev_raid.sh@380 -- # '[' -z 79b1d350-2ebc-4fdc-a98e-1f701514c2c2 ']' 00:18:21.574 21:14:44 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:21.832 [2024-06-07 21:14:44.338618] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:21.832 [2024-06-07 21:14:44.338643] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:21.832 [2024-06-07 21:14:44.338772] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.832 [2024-06-07 21:14:44.338879] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.832 [2024-06-07 21:14:44.338923] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:18:21.832 21:14:44 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.832 21:14:44 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:22.091 21:14:44 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:22.091 21:14:44 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:22.091 21:14:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:22.091 21:14:44 -- 
bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:22.349 21:14:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:22.349 21:14:44 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:22.607 21:14:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:22.607 21:14:45 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:22.865 21:14:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:22.865 21:14:45 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:23.124 21:14:45 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:23.124 21:14:45 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:23.382 21:14:45 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:23.382 21:14:45 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:23.382 21:14:45 -- common/autotest_common.sh@640 -- # local es=0 00:18:23.382 21:14:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:23.382 21:14:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:23.382 21:14:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:23.382 21:14:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:23.383 21:14:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:23.383 21:14:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:23.383 21:14:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:23.383 21:14:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:23.383 21:14:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:23.383 21:14:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:23.383 [2024-06-07 21:14:46.027231] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:23.383 [2024-06-07 21:14:46.029168] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:23.383 [2024-06-07 21:14:46.029244] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:23.383 [2024-06-07 21:14:46.029282] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:23.383 [2024-06-07 21:14:46.029331] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:23.383 [2024-06-07 21:14:46.029439] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:23.383 [2024-06-07 21:14:46.029489] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:23.383 [2024-06-07 21:14:46.029544] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:18:23.383 [2024-06-07 21:14:46.029568] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:23.383 [2024-06-07 21:14:46.029578] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:18:23.383 request: 00:18:23.383 { 00:18:23.383 "name": "raid_bdev1", 00:18:23.383 "raid_level": "raid1", 00:18:23.383 "base_bdevs": [ 00:18:23.383 "malloc1", 00:18:23.383 "malloc2", 00:18:23.383 "malloc3", 00:18:23.383 "malloc4" 00:18:23.383 ], 00:18:23.383 "superblock": false, 00:18:23.383 "method": "bdev_raid_create", 00:18:23.383 "req_id": 1 00:18:23.383 } 00:18:23.383 Got JSON-RPC error response 00:18:23.383 response: 00:18:23.383 { 00:18:23.383 "code": -17, 00:18:23.383 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:23.383 } 00:18:23.383 21:14:46 -- common/autotest_common.sh@643 -- # es=1 00:18:23.383 21:14:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:23.383 21:14:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:23.383 21:14:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:23.383 21:14:46 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.383 21:14:46 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:23.652 21:14:46 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:23.652 21:14:46 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:23.652 21:14:46 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:23.922 [2024-06-07 21:14:46.483281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:23.922 [2024-06-07 21:14:46.483403] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.923 [2024-06-07 21:14:46.483440] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:23.923 [2024-06-07 21:14:46.483468] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.923 [2024-06-07 21:14:46.486114] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.923 [2024-06-07 21:14:46.486214] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:23.923 [2024-06-07 21:14:46.486324] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:23.923 [2024-06-07 21:14:46.486400] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:23.923 pt1 00:18:23.923 21:14:46 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:23.923 21:14:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:23.923 21:14:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:23.923 21:14:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:23.923 21:14:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:23.923 21:14:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:23.923 21:14:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:23.923 21:14:46 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:23.923 21:14:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:23.923 21:14:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:23.923 21:14:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.923 21:14:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.181 21:14:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:24.181 "name": "raid_bdev1", 00:18:24.181 "uuid": "79b1d350-2ebc-4fdc-a98e-1f701514c2c2", 00:18:24.181 "strip_size_kb": 0, 00:18:24.181 "state": "configuring", 00:18:24.181 "raid_level": "raid1", 00:18:24.181 "superblock": true, 00:18:24.181 "num_base_bdevs": 4, 00:18:24.181 "num_base_bdevs_discovered": 1, 00:18:24.181 "num_base_bdevs_operational": 4, 00:18:24.181 "base_bdevs_list": [ 00:18:24.181 { 00:18:24.181 "name": "pt1", 00:18:24.181 "uuid": "8be95550-7fe0-565a-8360-bc3318ac2801", 00:18:24.181 "is_configured": true, 00:18:24.181 "data_offset": 2048, 00:18:24.181 "data_size": 63488 00:18:24.181 }, 00:18:24.181 { 00:18:24.181 "name": null, 00:18:24.181 "uuid": "ecf636dc-9f01-5a46-be30-d5c42f2b6b01", 00:18:24.181 "is_configured": false, 00:18:24.181 "data_offset": 2048, 00:18:24.181 "data_size": 63488 00:18:24.181 }, 00:18:24.181 { 00:18:24.181 "name": null, 00:18:24.181 "uuid": "03ed0b71-faf4-54a1-8533-521e84943a20", 00:18:24.181 "is_configured": false, 00:18:24.181 "data_offset": 2048, 00:18:24.181 "data_size": 63488 00:18:24.181 }, 00:18:24.181 { 00:18:24.181 "name": null, 00:18:24.181 "uuid": "29febac5-cef6-5fcd-8f23-13f420e62df6", 00:18:24.181 "is_configured": false, 00:18:24.181 "data_offset": 2048, 00:18:24.181 "data_size": 63488 00:18:24.181 } 00:18:24.181 ] 00:18:24.181 }' 00:18:24.181 21:14:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:24.181 21:14:46 -- common/autotest_common.sh@10 -- # set +x 00:18:24.747 21:14:47 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:18:24.747 21:14:47 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:25.005 [2024-06-07 21:14:47.491552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:25.005 [2024-06-07 21:14:47.491665] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.005 [2024-06-07 21:14:47.491704] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:25.005 [2024-06-07 21:14:47.491741] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.005 [2024-06-07 21:14:47.492237] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.005 [2024-06-07 21:14:47.492304] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:25.005 [2024-06-07 21:14:47.492447] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:25.005 [2024-06-07 21:14:47.492499] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:25.005 pt2 00:18:25.005 21:14:47 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:25.263 [2024-06-07 21:14:47.743643] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:25.263 21:14:47 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 
configuring raid1 0 4 00:18:25.263 21:14:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:25.263 21:14:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:25.263 21:14:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:25.263 21:14:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:25.263 21:14:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:25.263 21:14:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:25.263 21:14:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:25.263 21:14:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:25.263 21:14:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:25.263 21:14:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.263 21:14:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.521 21:14:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:25.521 "name": "raid_bdev1", 00:18:25.521 "uuid": "79b1d350-2ebc-4fdc-a98e-1f701514c2c2", 00:18:25.521 "strip_size_kb": 0, 00:18:25.521 "state": "configuring", 00:18:25.521 "raid_level": "raid1", 00:18:25.521 "superblock": true, 00:18:25.521 "num_base_bdevs": 4, 00:18:25.521 "num_base_bdevs_discovered": 1, 00:18:25.521 "num_base_bdevs_operational": 4, 00:18:25.521 "base_bdevs_list": [ 00:18:25.521 { 00:18:25.521 "name": "pt1", 00:18:25.521 "uuid": "8be95550-7fe0-565a-8360-bc3318ac2801", 00:18:25.521 "is_configured": true, 00:18:25.521 "data_offset": 2048, 00:18:25.521 "data_size": 63488 00:18:25.521 }, 00:18:25.521 { 00:18:25.521 "name": null, 00:18:25.521 "uuid": "ecf636dc-9f01-5a46-be30-d5c42f2b6b01", 00:18:25.521 "is_configured": false, 00:18:25.521 "data_offset": 2048, 00:18:25.521 "data_size": 63488 00:18:25.521 }, 00:18:25.521 { 00:18:25.521 "name": null, 00:18:25.521 "uuid": "03ed0b71-faf4-54a1-8533-521e84943a20", 00:18:25.521 "is_configured": false, 00:18:25.521 "data_offset": 2048, 00:18:25.521 "data_size": 63488 00:18:25.521 }, 00:18:25.521 { 00:18:25.521 "name": null, 00:18:25.522 "uuid": "29febac5-cef6-5fcd-8f23-13f420e62df6", 00:18:25.522 "is_configured": false, 00:18:25.522 "data_offset": 2048, 00:18:25.522 "data_size": 63488 00:18:25.522 } 00:18:25.522 ] 00:18:25.522 }' 00:18:25.522 21:14:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:25.522 21:14:48 -- common/autotest_common.sh@10 -- # set +x 00:18:26.088 21:14:48 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:26.088 21:14:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:26.088 21:14:48 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:26.346 [2024-06-07 21:14:48.931974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:26.346 [2024-06-07 21:14:48.932096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.346 [2024-06-07 21:14:48.932138] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:26.346 [2024-06-07 21:14:48.932167] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.346 [2024-06-07 21:14:48.932661] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.346 [2024-06-07 21:14:48.932745] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:26.346 
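Each base bdev in these tests is a passthru stacked on a 32 MiB, 512-byte-block malloc bdev (65536 blocks, matching the num_blocks/block_size reported earlier in the trace). Because raid_bdev1 was created with -s, a raid superblock sits on every member, so re-creating pt2 with its fixed UUID lets the examine path find that superblock and re-claim the bdev, exactly as the NOTICE/DEBUG lines above show. The construction, verbatim from the trace apart from the condensed $rpc shorthand:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_malloc_create 32 512 -b malloc2        # 32 MiB of RAM-backed 512 B blocks
$rpc bdev_passthru_create -b malloc2 -p pt2 \
     -u 00000000-0000-0000-0000-000000000002     # fixed UUID keeps the member's identity stable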
[2024-06-07 21:14:48.932843] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:26.346 [2024-06-07 21:14:48.932872] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:26.346 pt2 00:18:26.346 21:14:48 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:26.346 21:14:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:26.346 21:14:48 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:26.604 [2024-06-07 21:14:49.168009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:26.604 [2024-06-07 21:14:49.168122] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.604 [2024-06-07 21:14:49.168159] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:26.604 [2024-06-07 21:14:49.168185] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.604 [2024-06-07 21:14:49.168667] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.604 [2024-06-07 21:14:49.168751] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:26.604 [2024-06-07 21:14:49.168858] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:26.604 [2024-06-07 21:14:49.168884] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:26.604 pt3 00:18:26.605 21:14:49 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:26.605 21:14:49 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:26.605 21:14:49 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:26.863 [2024-06-07 21:14:49.376067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:26.863 [2024-06-07 21:14:49.376171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.863 [2024-06-07 21:14:49.376203] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:26.863 [2024-06-07 21:14:49.376228] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.863 [2024-06-07 21:14:49.376680] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.863 [2024-06-07 21:14:49.376741] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:26.863 [2024-06-07 21:14:49.376869] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:26.863 [2024-06-07 21:14:49.376897] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:26.863 [2024-06-07 21:14:49.377066] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:18:26.863 [2024-06-07 21:14:49.377090] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:26.863 [2024-06-07 21:14:49.377179] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:26.863 [2024-06-07 21:14:49.377521] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:18:26.863 [2024-06-07 21:14:49.377543] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x61600000a880 00:18:26.863 [2024-06-07 21:14:49.377649] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.863 pt4 00:18:26.863 21:14:49 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:26.863 21:14:49 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:26.863 21:14:49 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:26.863 21:14:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:26.863 21:14:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:26.863 21:14:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:26.863 21:14:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:26.863 21:14:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:26.863 21:14:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:26.863 21:14:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:26.863 21:14:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:26.863 21:14:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:26.863 21:14:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.863 21:14:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.121 21:14:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:27.121 "name": "raid_bdev1", 00:18:27.121 "uuid": "79b1d350-2ebc-4fdc-a98e-1f701514c2c2", 00:18:27.121 "strip_size_kb": 0, 00:18:27.121 "state": "online", 00:18:27.121 "raid_level": "raid1", 00:18:27.121 "superblock": true, 00:18:27.121 "num_base_bdevs": 4, 00:18:27.121 "num_base_bdevs_discovered": 4, 00:18:27.121 "num_base_bdevs_operational": 4, 00:18:27.121 "base_bdevs_list": [ 00:18:27.121 { 00:18:27.121 "name": "pt1", 00:18:27.121 "uuid": "8be95550-7fe0-565a-8360-bc3318ac2801", 00:18:27.121 "is_configured": true, 00:18:27.121 "data_offset": 2048, 00:18:27.121 "data_size": 63488 00:18:27.121 }, 00:18:27.121 { 00:18:27.121 "name": "pt2", 00:18:27.121 "uuid": "ecf636dc-9f01-5a46-be30-d5c42f2b6b01", 00:18:27.121 "is_configured": true, 00:18:27.121 "data_offset": 2048, 00:18:27.121 "data_size": 63488 00:18:27.121 }, 00:18:27.121 { 00:18:27.121 "name": "pt3", 00:18:27.121 "uuid": "03ed0b71-faf4-54a1-8533-521e84943a20", 00:18:27.121 "is_configured": true, 00:18:27.121 "data_offset": 2048, 00:18:27.121 "data_size": 63488 00:18:27.121 }, 00:18:27.121 { 00:18:27.121 "name": "pt4", 00:18:27.121 "uuid": "29febac5-cef6-5fcd-8f23-13f420e62df6", 00:18:27.121 "is_configured": true, 00:18:27.121 "data_offset": 2048, 00:18:27.121 "data_size": 63488 00:18:27.121 } 00:18:27.121 ] 00:18:27.121 }' 00:18:27.121 21:14:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:27.121 21:14:49 -- common/autotest_common.sh@10 -- # set +x 00:18:27.687 21:14:50 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:27.687 21:14:50 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:27.945 [2024-06-07 21:14:50.496604] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.945 21:14:50 -- bdev/bdev_raid.sh@430 -- # '[' 79b1d350-2ebc-4fdc-a98e-1f701514c2c2 '!=' 79b1d350-2ebc-4fdc-a98e-1f701514c2c2 ']' 00:18:27.945 21:14:50 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:18:27.945 21:14:50 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:27.945 21:14:50 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:27.945 21:14:50 -- bdev/bdev_raid.sh@436 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:28.204 [2024-06-07 21:14:50.688379] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:28.204 21:14:50 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:28.204 21:14:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:28.204 21:14:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:28.204 21:14:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:28.204 21:14:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:28.204 21:14:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:28.204 21:14:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:28.204 21:14:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:28.204 21:14:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:28.204 21:14:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:28.204 21:14:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.204 21:14:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.462 21:14:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:28.462 "name": "raid_bdev1", 00:18:28.462 "uuid": "79b1d350-2ebc-4fdc-a98e-1f701514c2c2", 00:18:28.462 "strip_size_kb": 0, 00:18:28.462 "state": "online", 00:18:28.462 "raid_level": "raid1", 00:18:28.462 "superblock": true, 00:18:28.462 "num_base_bdevs": 4, 00:18:28.462 "num_base_bdevs_discovered": 3, 00:18:28.462 "num_base_bdevs_operational": 3, 00:18:28.462 "base_bdevs_list": [ 00:18:28.462 { 00:18:28.462 "name": null, 00:18:28.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.462 "is_configured": false, 00:18:28.462 "data_offset": 2048, 00:18:28.462 "data_size": 63488 00:18:28.462 }, 00:18:28.462 { 00:18:28.462 "name": "pt2", 00:18:28.462 "uuid": "ecf636dc-9f01-5a46-be30-d5c42f2b6b01", 00:18:28.462 "is_configured": true, 00:18:28.462 "data_offset": 2048, 00:18:28.462 "data_size": 63488 00:18:28.462 }, 00:18:28.462 { 00:18:28.462 "name": "pt3", 00:18:28.462 "uuid": "03ed0b71-faf4-54a1-8533-521e84943a20", 00:18:28.462 "is_configured": true, 00:18:28.462 "data_offset": 2048, 00:18:28.462 "data_size": 63488 00:18:28.462 }, 00:18:28.462 { 00:18:28.462 "name": "pt4", 00:18:28.462 "uuid": "29febac5-cef6-5fcd-8f23-13f420e62df6", 00:18:28.462 "is_configured": true, 00:18:28.462 "data_offset": 2048, 00:18:28.462 "data_size": 63488 00:18:28.462 } 00:18:28.462 ] 00:18:28.462 }' 00:18:28.462 21:14:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:28.462 21:14:50 -- common/autotest_common.sh@10 -- # set +x 00:18:29.028 21:14:51 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:29.286 [2024-06-07 21:14:51.804650] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:29.286 [2024-06-07 21:14:51.804688] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:29.286 [2024-06-07 21:14:51.804782] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:29.286 [2024-06-07 21:14:51.804861] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:29.286 [2024-06-07 21:14:51.804872] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name 
raid_bdev1, state offline 00:18:29.286 21:14:51 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.286 21:14:51 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:18:29.544 21:14:52 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:18:29.544 21:14:52 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:18:29.544 21:14:52 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:18:29.544 21:14:52 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:29.544 21:14:52 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:29.802 21:14:52 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:29.802 21:14:52 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:29.802 21:14:52 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:30.061 21:14:52 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:30.061 21:14:52 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:30.061 21:14:52 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:30.061 21:14:52 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:30.061 21:14:52 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:30.061 21:14:52 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:18:30.061 21:14:52 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:30.061 21:14:52 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:30.319 [2024-06-07 21:14:52.904932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:30.319 [2024-06-07 21:14:52.905081] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.319 [2024-06-07 21:14:52.905116] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:18:30.319 [2024-06-07 21:14:52.905143] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.319 [2024-06-07 21:14:52.907413] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.319 [2024-06-07 21:14:52.907523] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:30.319 [2024-06-07 21:14:52.907645] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:30.319 [2024-06-07 21:14:52.907682] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:30.319 pt2 00:18:30.319 21:14:52 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:30.319 21:14:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:30.319 21:14:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:30.319 21:14:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:30.319 21:14:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:30.319 21:14:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:30.319 21:14:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:30.319 21:14:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:30.319 21:14:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:30.319 21:14:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:30.319 21:14:52 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.319 21:14:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.576 21:14:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:30.576 "name": "raid_bdev1", 00:18:30.576 "uuid": "79b1d350-2ebc-4fdc-a98e-1f701514c2c2", 00:18:30.576 "strip_size_kb": 0, 00:18:30.576 "state": "configuring", 00:18:30.576 "raid_level": "raid1", 00:18:30.576 "superblock": true, 00:18:30.576 "num_base_bdevs": 4, 00:18:30.576 "num_base_bdevs_discovered": 1, 00:18:30.576 "num_base_bdevs_operational": 3, 00:18:30.576 "base_bdevs_list": [ 00:18:30.576 { 00:18:30.576 "name": null, 00:18:30.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.576 "is_configured": false, 00:18:30.576 "data_offset": 2048, 00:18:30.576 "data_size": 63488 00:18:30.576 }, 00:18:30.576 { 00:18:30.576 "name": "pt2", 00:18:30.576 "uuid": "ecf636dc-9f01-5a46-be30-d5c42f2b6b01", 00:18:30.576 "is_configured": true, 00:18:30.576 "data_offset": 2048, 00:18:30.576 "data_size": 63488 00:18:30.576 }, 00:18:30.576 { 00:18:30.576 "name": null, 00:18:30.576 "uuid": "03ed0b71-faf4-54a1-8533-521e84943a20", 00:18:30.576 "is_configured": false, 00:18:30.576 "data_offset": 2048, 00:18:30.576 "data_size": 63488 00:18:30.576 }, 00:18:30.576 { 00:18:30.576 "name": null, 00:18:30.576 "uuid": "29febac5-cef6-5fcd-8f23-13f420e62df6", 00:18:30.576 "is_configured": false, 00:18:30.576 "data_offset": 2048, 00:18:30.576 "data_size": 63488 00:18:30.577 } 00:18:30.577 ] 00:18:30.577 }' 00:18:30.577 21:14:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:30.577 21:14:53 -- common/autotest_common.sh@10 -- # set +x 00:18:31.142 21:14:53 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:18:31.142 21:14:53 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:31.142 21:14:53 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:31.400 [2024-06-07 21:14:53.957195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:31.400 [2024-06-07 21:14:53.957312] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.400 [2024-06-07 21:14:53.957354] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:31.400 [2024-06-07 21:14:53.957383] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.400 [2024-06-07 21:14:53.957842] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.400 [2024-06-07 21:14:53.957886] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:31.400 [2024-06-07 21:14:53.957969] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:31.400 [2024-06-07 21:14:53.957996] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:31.400 pt3 00:18:31.400 21:14:53 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:31.400 21:14:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:31.400 21:14:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:31.400 21:14:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:31.400 21:14:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:31.400 21:14:53 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:18:31.400 21:14:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:31.400 21:14:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:31.400 21:14:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:31.400 21:14:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:31.400 21:14:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.400 21:14:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.659 21:14:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:31.659 "name": "raid_bdev1", 00:18:31.659 "uuid": "79b1d350-2ebc-4fdc-a98e-1f701514c2c2", 00:18:31.659 "strip_size_kb": 0, 00:18:31.659 "state": "configuring", 00:18:31.659 "raid_level": "raid1", 00:18:31.659 "superblock": true, 00:18:31.659 "num_base_bdevs": 4, 00:18:31.659 "num_base_bdevs_discovered": 2, 00:18:31.659 "num_base_bdevs_operational": 3, 00:18:31.659 "base_bdevs_list": [ 00:18:31.659 { 00:18:31.659 "name": null, 00:18:31.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.659 "is_configured": false, 00:18:31.659 "data_offset": 2048, 00:18:31.659 "data_size": 63488 00:18:31.659 }, 00:18:31.659 { 00:18:31.659 "name": "pt2", 00:18:31.659 "uuid": "ecf636dc-9f01-5a46-be30-d5c42f2b6b01", 00:18:31.659 "is_configured": true, 00:18:31.659 "data_offset": 2048, 00:18:31.659 "data_size": 63488 00:18:31.659 }, 00:18:31.659 { 00:18:31.659 "name": "pt3", 00:18:31.659 "uuid": "03ed0b71-faf4-54a1-8533-521e84943a20", 00:18:31.659 "is_configured": true, 00:18:31.659 "data_offset": 2048, 00:18:31.659 "data_size": 63488 00:18:31.659 }, 00:18:31.659 { 00:18:31.659 "name": null, 00:18:31.659 "uuid": "29febac5-cef6-5fcd-8f23-13f420e62df6", 00:18:31.659 "is_configured": false, 00:18:31.659 "data_offset": 2048, 00:18:31.659 "data_size": 63488 00:18:31.659 } 00:18:31.659 ] 00:18:31.659 }' 00:18:31.659 21:14:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:31.659 21:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:32.262 21:14:54 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:18:32.262 21:14:54 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:32.262 21:14:54 -- bdev/bdev_raid.sh@462 -- # i=3 00:18:32.262 21:14:54 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:32.521 [2024-06-07 21:14:55.001544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:32.521 [2024-06-07 21:14:55.001658] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.521 [2024-06-07 21:14:55.001699] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:32.521 [2024-06-07 21:14:55.001719] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.521 [2024-06-07 21:14:55.002193] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.521 [2024-06-07 21:14:55.002236] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:32.521 [2024-06-07 21:14:55.002349] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:32.522 [2024-06-07 21:14:55.002378] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:32.522 [2024-06-07 21:14:55.002504] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io 
device register 0x61600000bd80 00:18:32.522 [2024-06-07 21:14:55.002516] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:32.522 [2024-06-07 21:14:55.002625] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:32.522 [2024-06-07 21:14:55.002976] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:18:32.522 [2024-06-07 21:14:55.003012] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:18:32.522 [2024-06-07 21:14:55.003133] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.522 pt4 00:18:32.522 21:14:55 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:32.522 21:14:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:32.522 21:14:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:32.522 21:14:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:32.522 21:14:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:32.522 21:14:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:32.522 21:14:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:32.522 21:14:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:32.522 21:14:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:32.522 21:14:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:32.522 21:14:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.522 21:14:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.780 21:14:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:32.780 "name": "raid_bdev1", 00:18:32.780 "uuid": "79b1d350-2ebc-4fdc-a98e-1f701514c2c2", 00:18:32.780 "strip_size_kb": 0, 00:18:32.780 "state": "online", 00:18:32.780 "raid_level": "raid1", 00:18:32.780 "superblock": true, 00:18:32.780 "num_base_bdevs": 4, 00:18:32.780 "num_base_bdevs_discovered": 3, 00:18:32.780 "num_base_bdevs_operational": 3, 00:18:32.780 "base_bdevs_list": [ 00:18:32.780 { 00:18:32.780 "name": null, 00:18:32.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.780 "is_configured": false, 00:18:32.780 "data_offset": 2048, 00:18:32.780 "data_size": 63488 00:18:32.780 }, 00:18:32.780 { 00:18:32.780 "name": "pt2", 00:18:32.780 "uuid": "ecf636dc-9f01-5a46-be30-d5c42f2b6b01", 00:18:32.780 "is_configured": true, 00:18:32.780 "data_offset": 2048, 00:18:32.780 "data_size": 63488 00:18:32.780 }, 00:18:32.780 { 00:18:32.780 "name": "pt3", 00:18:32.780 "uuid": "03ed0b71-faf4-54a1-8533-521e84943a20", 00:18:32.780 "is_configured": true, 00:18:32.780 "data_offset": 2048, 00:18:32.780 "data_size": 63488 00:18:32.780 }, 00:18:32.780 { 00:18:32.780 "name": "pt4", 00:18:32.780 "uuid": "29febac5-cef6-5fcd-8f23-13f420e62df6", 00:18:32.780 "is_configured": true, 00:18:32.780 "data_offset": 2048, 00:18:32.780 "data_size": 63488 00:18:32.780 } 00:18:32.780 ] 00:18:32.780 }' 00:18:32.780 21:14:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:32.780 21:14:55 -- common/autotest_common.sh@10 -- # set +x 00:18:33.348 21:14:55 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:18:33.348 21:14:55 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:33.607 [2024-06-07 21:14:56.237797] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:18:33.607 [2024-06-07 21:14:56.237835] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.607 [2024-06-07 21:14:56.237926] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.607 [2024-06-07 21:14:56.238001] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:33.607 [2024-06-07 21:14:56.238011] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:18:33.607 21:14:56 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.607 21:14:56 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:18:33.865 21:14:56 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:18:33.865 21:14:56 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:18:33.865 21:14:56 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:34.124 [2024-06-07 21:14:56.697862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:34.124 [2024-06-07 21:14:56.697956] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.124 [2024-06-07 21:14:56.697995] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:34.124 [2024-06-07 21:14:56.698015] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.124 [2024-06-07 21:14:56.700110] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.124 [2024-06-07 21:14:56.700187] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:34.124 [2024-06-07 21:14:56.700263] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:34.124 [2024-06-07 21:14:56.700332] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:34.124 pt1 00:18:34.124 21:14:56 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:34.124 21:14:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:34.124 21:14:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:34.124 21:14:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:34.124 21:14:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:34.124 21:14:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:34.124 21:14:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:34.124 21:14:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:34.124 21:14:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:34.124 21:14:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:34.125 21:14:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.125 21:14:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.383 21:14:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:34.383 "name": "raid_bdev1", 00:18:34.383 "uuid": "79b1d350-2ebc-4fdc-a98e-1f701514c2c2", 00:18:34.383 "strip_size_kb": 0, 00:18:34.383 "state": "configuring", 00:18:34.383 "raid_level": "raid1", 00:18:34.383 "superblock": true, 00:18:34.383 "num_base_bdevs": 4, 00:18:34.383 "num_base_bdevs_discovered": 1, 
00:18:34.383 "num_base_bdevs_operational": 4, 00:18:34.383 "base_bdevs_list": [ 00:18:34.383 { 00:18:34.383 "name": "pt1", 00:18:34.383 "uuid": "8be95550-7fe0-565a-8360-bc3318ac2801", 00:18:34.383 "is_configured": true, 00:18:34.383 "data_offset": 2048, 00:18:34.383 "data_size": 63488 00:18:34.383 }, 00:18:34.383 { 00:18:34.383 "name": null, 00:18:34.383 "uuid": "ecf636dc-9f01-5a46-be30-d5c42f2b6b01", 00:18:34.383 "is_configured": false, 00:18:34.383 "data_offset": 2048, 00:18:34.383 "data_size": 63488 00:18:34.383 }, 00:18:34.383 { 00:18:34.383 "name": null, 00:18:34.383 "uuid": "03ed0b71-faf4-54a1-8533-521e84943a20", 00:18:34.383 "is_configured": false, 00:18:34.383 "data_offset": 2048, 00:18:34.383 "data_size": 63488 00:18:34.383 }, 00:18:34.383 { 00:18:34.383 "name": null, 00:18:34.383 "uuid": "29febac5-cef6-5fcd-8f23-13f420e62df6", 00:18:34.383 "is_configured": false, 00:18:34.383 "data_offset": 2048, 00:18:34.383 "data_size": 63488 00:18:34.383 } 00:18:34.383 ] 00:18:34.383 }' 00:18:34.383 21:14:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:34.383 21:14:56 -- common/autotest_common.sh@10 -- # set +x 00:18:34.949 21:14:57 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:18:34.949 21:14:57 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:34.949 21:14:57 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:35.207 21:14:57 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:35.207 21:14:57 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:35.207 21:14:57 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:35.466 21:14:57 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:35.466 21:14:57 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:35.466 21:14:57 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:35.725 21:14:58 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:35.725 21:14:58 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:35.725 21:14:58 -- bdev/bdev_raid.sh@489 -- # i=3 00:18:35.725 21:14:58 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:35.983 [2024-06-07 21:14:58.450263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:35.983 [2024-06-07 21:14:58.450374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.983 [2024-06-07 21:14:58.450408] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:18:35.983 [2024-06-07 21:14:58.450433] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.983 [2024-06-07 21:14:58.450919] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.983 [2024-06-07 21:14:58.450990] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:35.983 [2024-06-07 21:14:58.451104] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:35.983 [2024-06-07 21:14:58.451120] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:35.983 [2024-06-07 21:14:58.451127] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:35.983 
[2024-06-07 21:14:58.451157] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 00:18:35.983 [2024-06-07 21:14:58.451241] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:35.983 pt4 00:18:35.983 21:14:58 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:35.983 21:14:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:35.983 21:14:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:35.983 21:14:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:35.983 21:14:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:35.983 21:14:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:35.983 21:14:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:35.983 21:14:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:35.983 21:14:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:35.983 21:14:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:35.983 21:14:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.983 21:14:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.241 21:14:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:36.241 "name": "raid_bdev1", 00:18:36.241 "uuid": "79b1d350-2ebc-4fdc-a98e-1f701514c2c2", 00:18:36.241 "strip_size_kb": 0, 00:18:36.241 "state": "configuring", 00:18:36.241 "raid_level": "raid1", 00:18:36.241 "superblock": true, 00:18:36.241 "num_base_bdevs": 4, 00:18:36.241 "num_base_bdevs_discovered": 1, 00:18:36.241 "num_base_bdevs_operational": 3, 00:18:36.241 "base_bdevs_list": [ 00:18:36.241 { 00:18:36.241 "name": null, 00:18:36.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.241 "is_configured": false, 00:18:36.241 "data_offset": 2048, 00:18:36.241 "data_size": 63488 00:18:36.241 }, 00:18:36.241 { 00:18:36.241 "name": null, 00:18:36.241 "uuid": "ecf636dc-9f01-5a46-be30-d5c42f2b6b01", 00:18:36.241 "is_configured": false, 00:18:36.241 "data_offset": 2048, 00:18:36.241 "data_size": 63488 00:18:36.241 }, 00:18:36.241 { 00:18:36.241 "name": null, 00:18:36.241 "uuid": "03ed0b71-faf4-54a1-8533-521e84943a20", 00:18:36.241 "is_configured": false, 00:18:36.241 "data_offset": 2048, 00:18:36.241 "data_size": 63488 00:18:36.241 }, 00:18:36.241 { 00:18:36.241 "name": "pt4", 00:18:36.241 "uuid": "29febac5-cef6-5fcd-8f23-13f420e62df6", 00:18:36.241 "is_configured": true, 00:18:36.241 "data_offset": 2048, 00:18:36.241 "data_size": 63488 00:18:36.241 } 00:18:36.241 ] 00:18:36.241 }' 00:18:36.241 21:14:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:36.241 21:14:58 -- common/autotest_common.sh@10 -- # set +x 00:18:36.807 21:14:59 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:18:36.807 21:14:59 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:36.807 21:14:59 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:37.066 [2024-06-07 21:14:59.582522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:37.066 [2024-06-07 21:14:59.582658] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.066 [2024-06-07 21:14:59.582696] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000d580 00:18:37.066 [2024-06-07 21:14:59.582722] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.066 [2024-06-07 21:14:59.583244] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.066 [2024-06-07 21:14:59.583307] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:37.066 [2024-06-07 21:14:59.583392] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:37.066 [2024-06-07 21:14:59.583421] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:37.066 pt2 00:18:37.066 21:14:59 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:18:37.066 21:14:59 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:37.066 21:14:59 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:37.323 [2024-06-07 21:14:59.842585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:37.323 [2024-06-07 21:14:59.842691] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.323 [2024-06-07 21:14:59.842725] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d880 00:18:37.323 [2024-06-07 21:14:59.842752] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.323 [2024-06-07 21:14:59.843267] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.323 [2024-06-07 21:14:59.843343] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:37.323 [2024-06-07 21:14:59.843459] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:37.323 [2024-06-07 21:14:59.843488] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:37.323 [2024-06-07 21:14:59.843635] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:18:37.324 [2024-06-07 21:14:59.843657] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:37.324 [2024-06-07 21:14:59.843748] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:18:37.324 [2024-06-07 21:14:59.844066] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:18:37.324 [2024-06-07 21:14:59.844090] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:18:37.324 [2024-06-07 21:14:59.844198] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.324 pt3 00:18:37.324 21:14:59 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:18:37.324 21:14:59 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:37.324 21:14:59 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:37.324 21:14:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:37.324 21:14:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:37.324 21:14:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:37.324 21:14:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:37.324 21:14:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:37.324 21:14:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:37.324 21:14:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:37.324 
21:14:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:37.324 21:14:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:37.324 21:14:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.324 21:14:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.581 21:15:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:37.581 "name": "raid_bdev1", 00:18:37.581 "uuid": "79b1d350-2ebc-4fdc-a98e-1f701514c2c2", 00:18:37.581 "strip_size_kb": 0, 00:18:37.581 "state": "online", 00:18:37.581 "raid_level": "raid1", 00:18:37.581 "superblock": true, 00:18:37.581 "num_base_bdevs": 4, 00:18:37.581 "num_base_bdevs_discovered": 3, 00:18:37.581 "num_base_bdevs_operational": 3, 00:18:37.581 "base_bdevs_list": [ 00:18:37.581 { 00:18:37.581 "name": null, 00:18:37.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.581 "is_configured": false, 00:18:37.581 "data_offset": 2048, 00:18:37.581 "data_size": 63488 00:18:37.581 }, 00:18:37.581 { 00:18:37.581 "name": "pt2", 00:18:37.581 "uuid": "ecf636dc-9f01-5a46-be30-d5c42f2b6b01", 00:18:37.581 "is_configured": true, 00:18:37.581 "data_offset": 2048, 00:18:37.581 "data_size": 63488 00:18:37.581 }, 00:18:37.581 { 00:18:37.581 "name": "pt3", 00:18:37.581 "uuid": "03ed0b71-faf4-54a1-8533-521e84943a20", 00:18:37.581 "is_configured": true, 00:18:37.581 "data_offset": 2048, 00:18:37.581 "data_size": 63488 00:18:37.581 }, 00:18:37.581 { 00:18:37.581 "name": "pt4", 00:18:37.581 "uuid": "29febac5-cef6-5fcd-8f23-13f420e62df6", 00:18:37.581 "is_configured": true, 00:18:37.581 "data_offset": 2048, 00:18:37.581 "data_size": 63488 00:18:37.581 } 00:18:37.581 ] 00:18:37.582 }' 00:18:37.582 21:15:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:37.582 21:15:00 -- common/autotest_common.sh@10 -- # set +x 00:18:38.148 21:15:00 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:38.148 21:15:00 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:18:38.407 [2024-06-07 21:15:00.929739] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.407 21:15:00 -- bdev/bdev_raid.sh@506 -- # '[' 79b1d350-2ebc-4fdc-a98e-1f701514c2c2 '!=' 79b1d350-2ebc-4fdc-a98e-1f701514c2c2 ']' 00:18:38.407 21:15:00 -- bdev/bdev_raid.sh@511 -- # killprocess 135187 00:18:38.407 21:15:00 -- common/autotest_common.sh@926 -- # '[' -z 135187 ']' 00:18:38.407 21:15:00 -- common/autotest_common.sh@930 -- # kill -0 135187 00:18:38.407 21:15:00 -- common/autotest_common.sh@931 -- # uname 00:18:38.407 21:15:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:38.407 21:15:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 135187 00:18:38.407 killing process with pid 135187 00:18:38.407 21:15:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:38.407 21:15:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:38.407 21:15:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 135187' 00:18:38.407 21:15:00 -- common/autotest_common.sh@945 -- # kill 135187 00:18:38.407 21:15:00 -- common/autotest_common.sh@950 -- # wait 135187 00:18:38.407 [2024-06-07 21:15:00.962258] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:38.407 [2024-06-07 21:15:00.962336] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.407 [2024-06-07 21:15:00.962449] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.407 [2024-06-07 21:15:00.962469] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:18:38.407 [2024-06-07 21:15:01.001481] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:38.666 ************************************ 00:18:38.666 END TEST raid_superblock_test 00:18:38.666 ************************************ 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:38.666 00:18:38.666 real 0m21.234s 00:18:38.666 user 0m40.404s 00:18:38.666 sys 0m2.346s 00:18:38.666 21:15:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:38.666 21:15:01 -- common/autotest_common.sh@10 -- # set +x 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:18:38.666 21:15:01 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:18:38.666 21:15:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:38.666 21:15:01 -- common/autotest_common.sh@10 -- # set +x 00:18:38.666 ************************************ 00:18:38.666 START TEST raid_rebuild_test 00:18:38.666 ************************************ 00:18:38.666 21:15:01 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false false 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@544 -- # raid_pid=135897 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135897 /var/tmp/spdk-raid.sock 00:18:38.666 21:15:01 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:38.666 21:15:01 -- 
common/autotest_common.sh@819 -- # '[' -z 135897 ']' 00:18:38.666 21:15:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:38.666 21:15:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:38.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:38.666 21:15:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:38.666 21:15:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:38.666 21:15:01 -- common/autotest_common.sh@10 -- # set +x 00:18:38.925 [2024-06-07 21:15:01.351495] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:38.925 [2024-06-07 21:15:01.351792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135897 ] 00:18:38.925 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:38.925 Zero copy mechanism will not be used. 00:18:38.925 [2024-06-07 21:15:01.514139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.925 [2024-06-07 21:15:01.593347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.183 [2024-06-07 21:15:01.647705] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.749 21:15:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:39.749 21:15:02 -- common/autotest_common.sh@852 -- # return 0 00:18:39.749 21:15:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:18:39.749 21:15:02 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:18:39.749 21:15:02 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:40.011 BaseBdev1 00:18:40.011 21:15:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:18:40.011 21:15:02 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:18:40.011 21:15:02 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:40.276 BaseBdev2 00:18:40.276 21:15:02 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:40.542 spare_malloc 00:18:40.542 21:15:03 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:40.807 spare_delay 00:18:40.807 21:15:03 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:40.807 [2024-06-07 21:15:03.432189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:40.807 [2024-06-07 21:15:03.432438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.807 [2024-06-07 21:15:03.432589] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:40.807 [2024-06-07 21:15:03.432746] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.807 [2024-06-07 21:15:03.434983] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.807 [2024-06-07 
21:15:03.435142] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:40.807 spare 00:18:40.807 21:15:03 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:41.063 [2024-06-07 21:15:03.636361] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:41.063 [2024-06-07 21:15:03.638509] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:41.063 [2024-06-07 21:15:03.638761] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:18:41.063 [2024-06-07 21:15:03.638871] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:41.063 [2024-06-07 21:15:03.639091] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:18:41.063 [2024-06-07 21:15:03.639622] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:18:41.063 [2024-06-07 21:15:03.639802] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:18:41.063 [2024-06-07 21:15:03.640061] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.063 21:15:03 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:41.063 21:15:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:41.063 21:15:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:41.063 21:15:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:41.063 21:15:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:41.063 21:15:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:41.063 21:15:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:41.063 21:15:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:41.063 21:15:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:41.063 21:15:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:41.063 21:15:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.063 21:15:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.321 21:15:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:41.321 "name": "raid_bdev1", 00:18:41.321 "uuid": "7073c32e-1ba4-40bf-b6a4-3853095fe4cd", 00:18:41.321 "strip_size_kb": 0, 00:18:41.321 "state": "online", 00:18:41.321 "raid_level": "raid1", 00:18:41.321 "superblock": false, 00:18:41.321 "num_base_bdevs": 2, 00:18:41.321 "num_base_bdevs_discovered": 2, 00:18:41.321 "num_base_bdevs_operational": 2, 00:18:41.321 "base_bdevs_list": [ 00:18:41.321 { 00:18:41.321 "name": "BaseBdev1", 00:18:41.321 "uuid": "2501d5ce-d1ae-4eb9-b9ad-ced4b22dce9e", 00:18:41.321 "is_configured": true, 00:18:41.321 "data_offset": 0, 00:18:41.321 "data_size": 65536 00:18:41.321 }, 00:18:41.321 { 00:18:41.321 "name": "BaseBdev2", 00:18:41.321 "uuid": "b3d58744-a0f4-4519-b5b4-cdd4d830e979", 00:18:41.321 "is_configured": true, 00:18:41.321 "data_offset": 0, 00:18:41.321 "data_size": 65536 00:18:41.321 } 00:18:41.321 ] 00:18:41.321 }' 00:18:41.321 21:15:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:41.321 21:15:03 -- common/autotest_common.sh@10 -- # set +x 00:18:41.886 21:15:04 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:41.886 21:15:04 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:18:42.145 [2024-06-07 21:15:04.744869] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:42.145 21:15:04 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:18:42.145 21:15:04 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.145 21:15:04 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:42.403 21:15:05 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:18:42.403 21:15:05 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:18:42.403 21:15:05 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:18:42.403 21:15:05 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:18:42.403 21:15:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:42.403 21:15:05 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:18:42.403 21:15:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:42.403 21:15:05 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:18:42.403 21:15:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:42.403 21:15:05 -- bdev/nbd_common.sh@12 -- # local i 00:18:42.403 21:15:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:42.403 21:15:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:42.403 21:15:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:42.662 [2024-06-07 21:15:05.208852] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:18:42.662 /dev/nbd0 00:18:42.662 21:15:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:42.662 21:15:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:42.662 21:15:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:18:42.662 21:15:05 -- common/autotest_common.sh@857 -- # local i 00:18:42.662 21:15:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:18:42.662 21:15:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:18:42.662 21:15:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:18:42.662 21:15:05 -- common/autotest_common.sh@861 -- # break 00:18:42.662 21:15:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:18:42.662 21:15:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:18:42.662 21:15:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:42.662 1+0 records in 00:18:42.662 1+0 records out 00:18:42.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323188 s, 12.7 MB/s 00:18:42.662 21:15:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:42.662 21:15:05 -- common/autotest_common.sh@874 -- # size=4096 00:18:42.662 21:15:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:42.662 21:15:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:18:42.662 21:15:05 -- common/autotest_common.sh@877 -- # return 0 00:18:42.662 21:15:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:42.662 21:15:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:42.662 21:15:05 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:18:42.663 21:15:05 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:18:42.663 21:15:05 -- bdev/bdev_raid.sh@586 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:18:47.932 65536+0 records in 00:18:47.932 65536+0 records out 00:18:47.932 33554432 bytes (34 MB, 32 MiB) copied, 4.57153 s, 7.3 MB/s 00:18:47.932 21:15:09 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:18:47.932 21:15:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:47.932 21:15:09 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:18:47.932 21:15:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:47.932 21:15:09 -- bdev/nbd_common.sh@51 -- # local i 00:18:47.932 21:15:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:47.932 21:15:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:47.932 21:15:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:47.932 21:15:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:47.932 21:15:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:47.932 21:15:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:47.932 21:15:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:47.932 21:15:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:47.932 21:15:10 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:18:47.932 [2024-06-07 21:15:10.059269] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.932 21:15:10 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:18:47.932 21:15:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:47.932 21:15:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:47.932 21:15:10 -- bdev/nbd_common.sh@41 -- # break 00:18:47.932 21:15:10 -- bdev/nbd_common.sh@45 -- # return 0 00:18:47.932 21:15:10 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:47.932 [2024-06-07 21:15:10.407035] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:47.932 21:15:10 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:47.932 21:15:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:47.932 21:15:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:47.932 21:15:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:47.932 21:15:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:47.932 21:15:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:47.932 21:15:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:47.932 21:15:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:47.932 21:15:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:47.932 21:15:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:47.932 21:15:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.932 21:15:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.190 21:15:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:48.190 "name": "raid_bdev1", 00:18:48.190 "uuid": "7073c32e-1ba4-40bf-b6a4-3853095fe4cd", 00:18:48.190 "strip_size_kb": 0, 00:18:48.190 "state": "online", 00:18:48.190 "raid_level": "raid1", 00:18:48.190 "superblock": false, 00:18:48.190 "num_base_bdevs": 2, 00:18:48.190 "num_base_bdevs_discovered": 1, 00:18:48.190 "num_base_bdevs_operational": 1, 00:18:48.190 "base_bdevs_list": [ 00:18:48.190 { 00:18:48.190 "name": null, 
00:18:48.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.190 "is_configured": false, 00:18:48.190 "data_offset": 0, 00:18:48.190 "data_size": 65536 00:18:48.190 }, 00:18:48.190 { 00:18:48.190 "name": "BaseBdev2", 00:18:48.190 "uuid": "b3d58744-a0f4-4519-b5b4-cdd4d830e979", 00:18:48.190 "is_configured": true, 00:18:48.190 "data_offset": 0, 00:18:48.190 "data_size": 65536 00:18:48.190 } 00:18:48.190 ] 00:18:48.190 }' 00:18:48.190 21:15:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:48.190 21:15:10 -- common/autotest_common.sh@10 -- # set +x 00:18:48.757 21:15:11 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:49.015 [2024-06-07 21:15:11.535331] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:18:49.015 [2024-06-07 21:15:11.535422] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:49.015 [2024-06-07 21:15:11.541085] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b500 00:18:49.015 [2024-06-07 21:15:11.543155] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:49.015 21:15:11 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:18:49.949 21:15:12 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.949 21:15:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:49.949 21:15:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:49.949 21:15:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:49.949 21:15:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:49.949 21:15:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.949 21:15:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.207 21:15:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:50.207 "name": "raid_bdev1", 00:18:50.207 "uuid": "7073c32e-1ba4-40bf-b6a4-3853095fe4cd", 00:18:50.207 "strip_size_kb": 0, 00:18:50.208 "state": "online", 00:18:50.208 "raid_level": "raid1", 00:18:50.208 "superblock": false, 00:18:50.208 "num_base_bdevs": 2, 00:18:50.208 "num_base_bdevs_discovered": 2, 00:18:50.208 "num_base_bdevs_operational": 2, 00:18:50.208 "process": { 00:18:50.208 "type": "rebuild", 00:18:50.208 "target": "spare", 00:18:50.208 "progress": { 00:18:50.208 "blocks": 24576, 00:18:50.208 "percent": 37 00:18:50.208 } 00:18:50.208 }, 00:18:50.208 "base_bdevs_list": [ 00:18:50.208 { 00:18:50.208 "name": "spare", 00:18:50.208 "uuid": "b0f3676e-17c5-58d2-9eed-d5318352dc66", 00:18:50.208 "is_configured": true, 00:18:50.208 "data_offset": 0, 00:18:50.208 "data_size": 65536 00:18:50.208 }, 00:18:50.208 { 00:18:50.208 "name": "BaseBdev2", 00:18:50.208 "uuid": "b3d58744-a0f4-4519-b5b4-cdd4d830e979", 00:18:50.208 "is_configured": true, 00:18:50.208 "data_offset": 0, 00:18:50.208 "data_size": 65536 00:18:50.208 } 00:18:50.208 ] 00:18:50.208 }' 00:18:50.208 21:15:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:50.208 21:15:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:50.208 21:15:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:50.471 21:15:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.471 21:15:12 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev spare 00:18:50.732 [2024-06-07 21:15:13.145750] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.732 [2024-06-07 21:15:13.153028] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:50.732 [2024-06-07 21:15:13.153125] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.732 21:15:13 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:50.732 21:15:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:50.732 21:15:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:50.732 21:15:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:50.732 21:15:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:50.732 21:15:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:50.732 21:15:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:50.732 21:15:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:50.732 21:15:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:50.732 21:15:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:50.732 21:15:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.732 21:15:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.732 21:15:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:50.732 "name": "raid_bdev1", 00:18:50.732 "uuid": "7073c32e-1ba4-40bf-b6a4-3853095fe4cd", 00:18:50.732 "strip_size_kb": 0, 00:18:50.732 "state": "online", 00:18:50.732 "raid_level": "raid1", 00:18:50.732 "superblock": false, 00:18:50.732 "num_base_bdevs": 2, 00:18:50.732 "num_base_bdevs_discovered": 1, 00:18:50.732 "num_base_bdevs_operational": 1, 00:18:50.732 "base_bdevs_list": [ 00:18:50.732 { 00:18:50.732 "name": null, 00:18:50.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.732 "is_configured": false, 00:18:50.732 "data_offset": 0, 00:18:50.732 "data_size": 65536 00:18:50.732 }, 00:18:50.732 { 00:18:50.732 "name": "BaseBdev2", 00:18:50.732 "uuid": "b3d58744-a0f4-4519-b5b4-cdd4d830e979", 00:18:50.732 "is_configured": true, 00:18:50.732 "data_offset": 0, 00:18:50.732 "data_size": 65536 00:18:50.732 } 00:18:50.732 ] 00:18:50.732 }' 00:18:50.732 21:15:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:50.732 21:15:13 -- common/autotest_common.sh@10 -- # set +x 00:18:51.667 21:15:14 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:51.667 21:15:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:51.667 21:15:14 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:51.667 21:15:14 -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:51.667 21:15:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:51.667 21:15:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.667 21:15:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.667 21:15:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:51.667 "name": "raid_bdev1", 00:18:51.667 "uuid": "7073c32e-1ba4-40bf-b6a4-3853095fe4cd", 00:18:51.667 "strip_size_kb": 0, 00:18:51.667 "state": "online", 00:18:51.667 "raid_level": "raid1", 00:18:51.667 "superblock": false, 00:18:51.667 "num_base_bdevs": 2, 00:18:51.667 "num_base_bdevs_discovered": 1, 00:18:51.667 
"num_base_bdevs_operational": 1, 00:18:51.667 "base_bdevs_list": [ 00:18:51.667 { 00:18:51.667 "name": null, 00:18:51.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.667 "is_configured": false, 00:18:51.667 "data_offset": 0, 00:18:51.667 "data_size": 65536 00:18:51.667 }, 00:18:51.667 { 00:18:51.667 "name": "BaseBdev2", 00:18:51.667 "uuid": "b3d58744-a0f4-4519-b5b4-cdd4d830e979", 00:18:51.667 "is_configured": true, 00:18:51.667 "data_offset": 0, 00:18:51.667 "data_size": 65536 00:18:51.667 } 00:18:51.667 ] 00:18:51.667 }' 00:18:51.667 21:15:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:51.667 21:15:14 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:51.667 21:15:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:51.925 21:15:14 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:51.925 21:15:14 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:52.183 [2024-06-07 21:15:14.605414] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:18:52.183 [2024-06-07 21:15:14.605479] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:52.184 [2024-06-07 21:15:14.610693] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b6a0 00:18:52.184 [2024-06-07 21:15:14.612792] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:52.184 21:15:14 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:18:53.119 21:15:15 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.119 21:15:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:53.119 21:15:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:53.119 21:15:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:53.119 21:15:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:53.119 21:15:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.119 21:15:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.378 21:15:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:53.378 "name": "raid_bdev1", 00:18:53.378 "uuid": "7073c32e-1ba4-40bf-b6a4-3853095fe4cd", 00:18:53.378 "strip_size_kb": 0, 00:18:53.378 "state": "online", 00:18:53.378 "raid_level": "raid1", 00:18:53.378 "superblock": false, 00:18:53.378 "num_base_bdevs": 2, 00:18:53.378 "num_base_bdevs_discovered": 2, 00:18:53.378 "num_base_bdevs_operational": 2, 00:18:53.378 "process": { 00:18:53.378 "type": "rebuild", 00:18:53.378 "target": "spare", 00:18:53.378 "progress": { 00:18:53.378 "blocks": 24576, 00:18:53.378 "percent": 37 00:18:53.378 } 00:18:53.378 }, 00:18:53.378 "base_bdevs_list": [ 00:18:53.378 { 00:18:53.378 "name": "spare", 00:18:53.378 "uuid": "b0f3676e-17c5-58d2-9eed-d5318352dc66", 00:18:53.378 "is_configured": true, 00:18:53.378 "data_offset": 0, 00:18:53.378 "data_size": 65536 00:18:53.378 }, 00:18:53.378 { 00:18:53.378 "name": "BaseBdev2", 00:18:53.378 "uuid": "b3d58744-a0f4-4519-b5b4-cdd4d830e979", 00:18:53.379 "is_configured": true, 00:18:53.379 "data_offset": 0, 00:18:53.379 "data_size": 65536 00:18:53.379 } 00:18:53.379 ] 00:18:53.379 }' 00:18:53.379 21:15:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:53.379 21:15:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:18:53.379 21:15:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:53.379 21:15:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.379 21:15:16 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:18:53.379 21:15:16 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:18:53.379 21:15:16 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:18:53.379 21:15:16 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:18:53.379 21:15:16 -- bdev/bdev_raid.sh@657 -- # local timeout=376 00:18:53.379 21:15:16 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:53.379 21:15:16 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.379 21:15:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:53.379 21:15:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:53.379 21:15:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:53.379 21:15:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:53.379 21:15:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.379 21:15:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.637 21:15:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:53.637 "name": "raid_bdev1", 00:18:53.637 "uuid": "7073c32e-1ba4-40bf-b6a4-3853095fe4cd", 00:18:53.637 "strip_size_kb": 0, 00:18:53.637 "state": "online", 00:18:53.637 "raid_level": "raid1", 00:18:53.637 "superblock": false, 00:18:53.637 "num_base_bdevs": 2, 00:18:53.637 "num_base_bdevs_discovered": 2, 00:18:53.637 "num_base_bdevs_operational": 2, 00:18:53.637 "process": { 00:18:53.637 "type": "rebuild", 00:18:53.637 "target": "spare", 00:18:53.637 "progress": { 00:18:53.637 "blocks": 32768, 00:18:53.637 "percent": 50 00:18:53.637 } 00:18:53.637 }, 00:18:53.637 "base_bdevs_list": [ 00:18:53.637 { 00:18:53.637 "name": "spare", 00:18:53.637 "uuid": "b0f3676e-17c5-58d2-9eed-d5318352dc66", 00:18:53.637 "is_configured": true, 00:18:53.637 "data_offset": 0, 00:18:53.637 "data_size": 65536 00:18:53.637 }, 00:18:53.637 { 00:18:53.637 "name": "BaseBdev2", 00:18:53.637 "uuid": "b3d58744-a0f4-4519-b5b4-cdd4d830e979", 00:18:53.637 "is_configured": true, 00:18:53.637 "data_offset": 0, 00:18:53.637 "data_size": 65536 00:18:53.637 } 00:18:53.637 ] 00:18:53.637 }' 00:18:53.637 21:15:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:53.637 21:15:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.637 21:15:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:53.896 21:15:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.896 21:15:16 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:54.831 21:15:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:54.831 21:15:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.831 21:15:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:54.831 21:15:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:54.831 21:15:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:54.831 21:15:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:54.831 21:15:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.831 21:15:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.089 21:15:17 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:55.089 "name": "raid_bdev1", 00:18:55.089 "uuid": "7073c32e-1ba4-40bf-b6a4-3853095fe4cd", 00:18:55.089 "strip_size_kb": 0, 00:18:55.089 "state": "online", 00:18:55.089 "raid_level": "raid1", 00:18:55.089 "superblock": false, 00:18:55.089 "num_base_bdevs": 2, 00:18:55.089 "num_base_bdevs_discovered": 2, 00:18:55.089 "num_base_bdevs_operational": 2, 00:18:55.089 "process": { 00:18:55.089 "type": "rebuild", 00:18:55.089 "target": "spare", 00:18:55.089 "progress": { 00:18:55.089 "blocks": 59392, 00:18:55.089 "percent": 90 00:18:55.089 } 00:18:55.089 }, 00:18:55.089 "base_bdevs_list": [ 00:18:55.089 { 00:18:55.089 "name": "spare", 00:18:55.089 "uuid": "b0f3676e-17c5-58d2-9eed-d5318352dc66", 00:18:55.089 "is_configured": true, 00:18:55.089 "data_offset": 0, 00:18:55.089 "data_size": 65536 00:18:55.089 }, 00:18:55.089 { 00:18:55.089 "name": "BaseBdev2", 00:18:55.089 "uuid": "b3d58744-a0f4-4519-b5b4-cdd4d830e979", 00:18:55.089 "is_configured": true, 00:18:55.089 "data_offset": 0, 00:18:55.089 "data_size": 65536 00:18:55.089 } 00:18:55.089 ] 00:18:55.089 }' 00:18:55.089 21:15:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:55.089 21:15:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:55.089 21:15:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:55.089 21:15:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.089 21:15:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:55.348 [2024-06-07 21:15:17.830503] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:55.348 [2024-06-07 21:15:17.830593] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:55.348 [2024-06-07 21:15:17.830682] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.282 21:15:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:56.282 21:15:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.282 21:15:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:56.282 21:15:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:56.282 21:15:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:56.282 21:15:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:56.282 21:15:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.282 21:15:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.282 21:15:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:56.282 "name": "raid_bdev1", 00:18:56.282 "uuid": "7073c32e-1ba4-40bf-b6a4-3853095fe4cd", 00:18:56.282 "strip_size_kb": 0, 00:18:56.282 "state": "online", 00:18:56.282 "raid_level": "raid1", 00:18:56.282 "superblock": false, 00:18:56.282 "num_base_bdevs": 2, 00:18:56.282 "num_base_bdevs_discovered": 2, 00:18:56.282 "num_base_bdevs_operational": 2, 00:18:56.282 "base_bdevs_list": [ 00:18:56.282 { 00:18:56.282 "name": "spare", 00:18:56.282 "uuid": "b0f3676e-17c5-58d2-9eed-d5318352dc66", 00:18:56.282 "is_configured": true, 00:18:56.282 "data_offset": 0, 00:18:56.282 "data_size": 65536 00:18:56.282 }, 00:18:56.282 { 00:18:56.282 "name": "BaseBdev2", 00:18:56.282 "uuid": "b3d58744-a0f4-4519-b5b4-cdd4d830e979", 00:18:56.282 "is_configured": true, 00:18:56.282 "data_offset": 0, 00:18:56.282 "data_size": 65536 00:18:56.282 } 00:18:56.282 ] 00:18:56.282 }' 
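Once the target logs "Finished rebuild on raid bdev raid_bdev1", the checks that follow below reduce to a few assertions on the same jq output: no process object remains, and both members are back online. A hedged condensation, reusing the illustrative $RPC shorthand from the sketch above:

info=$($RPC bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")')
[[ $(jq -r '.process.type   // "none"' <<<"$info") == none ]]   # no rebuild running
[[ $(jq -r '.process.target // "none"' <<<"$info") == none ]]
[[ $(jq -r '.state' <<<"$info") == online ]]
[[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") == 2 ]]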
00:18:56.282 21:15:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:56.540 21:15:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:56.540 21:15:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:56.540 21:15:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:18:56.540 21:15:19 -- bdev/bdev_raid.sh@660 -- # break 00:18:56.540 21:15:19 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:56.540 21:15:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:56.540 21:15:19 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:56.540 21:15:19 -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:56.540 21:15:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:56.540 21:15:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.540 21:15:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.798 21:15:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:56.798 "name": "raid_bdev1", 00:18:56.798 "uuid": "7073c32e-1ba4-40bf-b6a4-3853095fe4cd", 00:18:56.798 "strip_size_kb": 0, 00:18:56.798 "state": "online", 00:18:56.798 "raid_level": "raid1", 00:18:56.798 "superblock": false, 00:18:56.798 "num_base_bdevs": 2, 00:18:56.798 "num_base_bdevs_discovered": 2, 00:18:56.798 "num_base_bdevs_operational": 2, 00:18:56.798 "base_bdevs_list": [ 00:18:56.798 { 00:18:56.798 "name": "spare", 00:18:56.798 "uuid": "b0f3676e-17c5-58d2-9eed-d5318352dc66", 00:18:56.798 "is_configured": true, 00:18:56.798 "data_offset": 0, 00:18:56.798 "data_size": 65536 00:18:56.798 }, 00:18:56.798 { 00:18:56.798 "name": "BaseBdev2", 00:18:56.798 "uuid": "b3d58744-a0f4-4519-b5b4-cdd4d830e979", 00:18:56.798 "is_configured": true, 00:18:56.798 "data_offset": 0, 00:18:56.798 "data_size": 65536 00:18:56.798 } 00:18:56.798 ] 00:18:56.798 }' 00:18:56.798 21:15:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:56.798 21:15:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:56.798 21:15:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:56.798 21:15:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:56.798 21:15:19 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:56.798 21:15:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:56.798 21:15:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:56.798 21:15:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:56.798 21:15:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:56.798 21:15:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:56.798 21:15:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:56.798 21:15:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:56.798 21:15:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:56.798 21:15:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:56.798 21:15:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.798 21:15:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.072 21:15:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:57.072 "name": "raid_bdev1", 00:18:57.072 "uuid": "7073c32e-1ba4-40bf-b6a4-3853095fe4cd", 00:18:57.072 "strip_size_kb": 0, 00:18:57.072 "state": "online", 
00:18:57.072 "raid_level": "raid1", 00:18:57.072 "superblock": false, 00:18:57.072 "num_base_bdevs": 2, 00:18:57.072 "num_base_bdevs_discovered": 2, 00:18:57.072 "num_base_bdevs_operational": 2, 00:18:57.072 "base_bdevs_list": [ 00:18:57.072 { 00:18:57.072 "name": "spare", 00:18:57.072 "uuid": "b0f3676e-17c5-58d2-9eed-d5318352dc66", 00:18:57.072 "is_configured": true, 00:18:57.072 "data_offset": 0, 00:18:57.072 "data_size": 65536 00:18:57.072 }, 00:18:57.072 { 00:18:57.072 "name": "BaseBdev2", 00:18:57.072 "uuid": "b3d58744-a0f4-4519-b5b4-cdd4d830e979", 00:18:57.072 "is_configured": true, 00:18:57.072 "data_offset": 0, 00:18:57.072 "data_size": 65536 00:18:57.072 } 00:18:57.072 ] 00:18:57.072 }' 00:18:57.072 21:15:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:57.072 21:15:19 -- common/autotest_common.sh@10 -- # set +x 00:18:57.645 21:15:20 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:57.904 [2024-06-07 21:15:20.525447] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:57.904 [2024-06-07 21:15:20.525504] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:57.904 [2024-06-07 21:15:20.525647] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:57.904 [2024-06-07 21:15:20.525761] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:57.904 [2024-06-07 21:15:20.525785] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:18:57.904 21:15:20 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.904 21:15:20 -- bdev/bdev_raid.sh@671 -- # jq length 00:18:58.161 21:15:20 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:18:58.161 21:15:20 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:18:58.161 21:15:20 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:58.161 21:15:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:58.161 21:15:20 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:18:58.161 21:15:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:58.161 21:15:20 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:18:58.161 21:15:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:58.161 21:15:20 -- bdev/nbd_common.sh@12 -- # local i 00:18:58.161 21:15:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:58.161 21:15:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:58.161 21:15:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:58.419 /dev/nbd0 00:18:58.419 21:15:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:58.419 21:15:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:58.420 21:15:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:18:58.420 21:15:20 -- common/autotest_common.sh@857 -- # local i 00:18:58.420 21:15:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:18:58.420 21:15:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:18:58.420 21:15:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:18:58.420 21:15:20 -- common/autotest_common.sh@861 -- # break 00:18:58.420 21:15:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:18:58.420 
21:15:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:18:58.420 21:15:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:58.420 1+0 records in 00:18:58.420 1+0 records out 00:18:58.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369499 s, 11.1 MB/s 00:18:58.420 21:15:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:58.420 21:15:20 -- common/autotest_common.sh@874 -- # size=4096 00:18:58.420 21:15:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:58.420 21:15:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:18:58.420 21:15:20 -- common/autotest_common.sh@877 -- # return 0 00:18:58.420 21:15:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:58.420 21:15:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:58.420 21:15:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:18:58.678 /dev/nbd1 00:18:58.678 21:15:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:58.678 21:15:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:58.678 21:15:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:18:58.678 21:15:21 -- common/autotest_common.sh@857 -- # local i 00:18:58.678 21:15:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:18:58.678 21:15:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:18:58.678 21:15:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:18:58.678 21:15:21 -- common/autotest_common.sh@861 -- # break 00:18:58.678 21:15:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:18:58.678 21:15:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:18:58.678 21:15:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:58.678 1+0 records in 00:18:58.678 1+0 records out 00:18:58.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401399 s, 10.2 MB/s 00:18:58.678 21:15:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:58.678 21:15:21 -- common/autotest_common.sh@874 -- # size=4096 00:18:58.678 21:15:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:58.678 21:15:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:18:58.678 21:15:21 -- common/autotest_common.sh@877 -- # return 0 00:18:58.678 21:15:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:58.678 21:15:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:58.678 21:15:21 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:58.678 21:15:21 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:18:58.678 21:15:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:58.678 21:15:21 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:18:58.678 21:15:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:58.678 21:15:21 -- bdev/nbd_common.sh@51 -- # local i 00:18:58.678 21:15:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:58.678 21:15:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:58.937 21:15:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:58.937 21:15:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
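waitfornbd, traced above for nbd0 and nbd1, and waitfornbd_exit, whose trace continues below, share one shape: grep /proc/partitions for the device name up to 20 times until it appears (or, on teardown, disappears), with waitfornbd finishing on a single 4 KiB O_DIRECT read so the device is proven to service I/O rather than merely to exist. The cmp -i 0 above then diffs the two exports byte for byte from offset zero, which is correct here because this raid bdev was created without a superblock. A condensed sketch of the wait helper, assuming the 0.1 s retry pacing seen in the exit path:

    # Illustrative condensation, not the verbatim nbd_common.sh helper.
    waitfornbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        (( i <= 20 )) || return 1
        # One direct read: the kernel must answer I/O, not just list the node.
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    }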
00:18:58.937 21:15:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:58.937 21:15:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:58.937 21:15:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:58.937 21:15:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:58.937 21:15:21 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:18:59.195 21:15:21 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:18:59.195 21:15:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:59.195 21:15:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:59.195 21:15:21 -- bdev/nbd_common.sh@41 -- # break 00:18:59.195 21:15:21 -- bdev/nbd_common.sh@45 -- # return 0 00:18:59.195 21:15:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:59.195 21:15:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:18:59.453 21:15:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:59.454 21:15:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:59.454 21:15:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:59.454 21:15:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:59.454 21:15:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:59.454 21:15:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:59.454 21:15:21 -- bdev/nbd_common.sh@41 -- # break 00:18:59.454 21:15:21 -- bdev/nbd_common.sh@45 -- # return 0 00:18:59.454 21:15:21 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:18:59.454 21:15:21 -- bdev/bdev_raid.sh@709 -- # killprocess 135897 00:18:59.454 21:15:21 -- common/autotest_common.sh@926 -- # '[' -z 135897 ']' 00:18:59.454 21:15:21 -- common/autotest_common.sh@930 -- # kill -0 135897 00:18:59.454 21:15:21 -- common/autotest_common.sh@931 -- # uname 00:18:59.454 21:15:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:59.454 21:15:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 135897 00:18:59.454 killing process with pid 135897 00:18:59.454 Received shutdown signal, test time was about 60.000000 seconds 00:18:59.454 00:18:59.454 Latency(us) 00:18:59.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.454 =================================================================================================================== 00:18:59.454 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:59.454 21:15:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:59.454 21:15:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:59.454 21:15:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 135897' 00:18:59.454 21:15:21 -- common/autotest_common.sh@945 -- # kill 135897 00:18:59.454 21:15:21 -- common/autotest_common.sh@950 -- # wait 135897 00:18:59.454 [2024-06-07 21:15:21.970173] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:59.454 [2024-06-07 21:15:22.007704] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:59.712 ************************************ 00:18:59.712 END TEST raid_rebuild_test 00:18:59.712 ************************************ 00:18:59.712 21:15:22 -- bdev/bdev_raid.sh@711 -- # return 0 00:18:59.712 00:18:59.712 real 0m21.043s 00:18:59.712 user 0m29.620s 00:18:59.712 sys 0m3.751s 00:18:59.712 21:15:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:59.712 21:15:22 -- common/autotest_common.sh@10 -- # set +x 00:18:59.712 21:15:22 -- bdev/bdev_raid.sh@736 -- # 
run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:18:59.712 21:15:22 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:18:59.712 21:15:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:59.712 21:15:22 -- common/autotest_common.sh@10 -- # set +x 00:18:59.970 ************************************ 00:18:59.970 START TEST raid_rebuild_test_sb 00:18:59.970 ************************************ 00:18:59.970 21:15:22 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true false 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@544 -- # raid_pid=136473 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@545 -- # waitforlisten 136473 /var/tmp/spdk-raid.sock 00:18:59.970 21:15:22 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:59.970 21:15:22 -- common/autotest_common.sh@819 -- # '[' -z 136473 ']' 00:18:59.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:59.970 21:15:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:59.970 21:15:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:59.970 21:15:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:59.970 21:15:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:59.970 21:15:22 -- common/autotest_common.sh@10 -- # set +x 00:18:59.970 [2024-06-07 21:15:22.453625] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:59.970 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:59.970 Zero copy mechanism will not be used. 
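raid_rebuild_test_sb re-runs the same raid_rebuild_test body with superblock=true, which appends -s to create_arg and, as the JSON dumps further down show, moves data_offset from 0 to 2048 blocks to leave room for on-disk metadata. The I/O target is a bdevperf started idle and configured over a private RPC socket; the launch pattern, paraphrased from the trace above with $rootdir standing in for /home/vagrant/spdk_repo/spdk:

    # -z holds bdevperf idle until bdevs are configured over the RPC socket;
    # the job itself: 60 s of 50/50 random read/write against raid_bdev1,
    # 3 MiB I/Os at queue depth 2, with bdev_raid debug logging enabled.
    "$rootdir"/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # block until the socket answers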
00:18:59.970 [2024-06-07 21:15:22.453859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136473 ] 00:18:59.970 [2024-06-07 21:15:22.615095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.228 [2024-06-07 21:15:22.693560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.228 [2024-06-07 21:15:22.769835] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:00.801 21:15:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:00.801 21:15:23 -- common/autotest_common.sh@852 -- # return 0 00:19:00.801 21:15:23 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:00.801 21:15:23 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:19:00.801 21:15:23 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:01.062 BaseBdev1_malloc 00:19:01.062 21:15:23 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:01.320 [2024-06-07 21:15:23.854553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:01.320 [2024-06-07 21:15:23.854695] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.320 [2024-06-07 21:15:23.854741] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:01.320 [2024-06-07 21:15:23.854795] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.321 [2024-06-07 21:15:23.857493] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.321 [2024-06-07 21:15:23.857545] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:01.321 BaseBdev1 00:19:01.321 21:15:23 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:01.321 21:15:23 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:19:01.321 21:15:23 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:01.578 BaseBdev2_malloc 00:19:01.578 21:15:24 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:01.836 [2024-06-07 21:15:24.310380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:01.836 [2024-06-07 21:15:24.310489] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.836 [2024-06-07 21:15:24.310549] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:19:01.836 [2024-06-07 21:15:24.310615] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.836 [2024-06-07 21:15:24.313279] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.836 [2024-06-07 21:15:24.313380] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:01.836 BaseBdev2 00:19:01.836 21:15:24 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:02.094 spare_malloc 00:19:02.094 21:15:24 
-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:02.094 spare_delay 00:19:02.094 21:15:24 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:02.352 [2024-06-07 21:15:24.927472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:02.352 [2024-06-07 21:15:24.927596] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.352 [2024-06-07 21:15:24.927647] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:02.352 [2024-06-07 21:15:24.927722] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.352 [2024-06-07 21:15:24.930398] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.352 [2024-06-07 21:15:24.930740] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:02.352 spare 00:19:02.352 21:15:24 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:02.611 [2024-06-07 21:15:25.179823] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:02.611 [2024-06-07 21:15:25.182310] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:02.611 [2024-06-07 21:15:25.182696] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:19:02.611 [2024-06-07 21:15:25.182854] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:02.611 [2024-06-07 21:15:25.183110] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:19:02.611 [2024-06-07 21:15:25.183849] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:19:02.611 [2024-06-07 21:15:25.184010] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:19:02.611 [2024-06-07 21:15:25.184346] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.611 21:15:25 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:02.611 21:15:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:02.611 21:15:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:02.611 21:15:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:02.611 21:15:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:02.611 21:15:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:02.611 21:15:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:02.611 21:15:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:02.611 21:15:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:02.611 21:15:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:02.611 21:15:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.611 21:15:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.868 21:15:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:02.868 "name": "raid_bdev1", 00:19:02.868 "uuid": "a3991b17-f7ad-49b3-9b38-1e1b95cae968", 00:19:02.868 
"strip_size_kb": 0, 00:19:02.868 "state": "online", 00:19:02.868 "raid_level": "raid1", 00:19:02.868 "superblock": true, 00:19:02.868 "num_base_bdevs": 2, 00:19:02.868 "num_base_bdevs_discovered": 2, 00:19:02.868 "num_base_bdevs_operational": 2, 00:19:02.868 "base_bdevs_list": [ 00:19:02.868 { 00:19:02.868 "name": "BaseBdev1", 00:19:02.868 "uuid": "49b6925b-ce47-53e2-9a63-5ceb8ec1c61c", 00:19:02.868 "is_configured": true, 00:19:02.868 "data_offset": 2048, 00:19:02.868 "data_size": 63488 00:19:02.868 }, 00:19:02.868 { 00:19:02.868 "name": "BaseBdev2", 00:19:02.868 "uuid": "9ef7473b-8ed1-5f92-85a5-89738027696b", 00:19:02.868 "is_configured": true, 00:19:02.868 "data_offset": 2048, 00:19:02.868 "data_size": 63488 00:19:02.868 } 00:19:02.868 ] 00:19:02.868 }' 00:19:02.868 21:15:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:02.868 21:15:25 -- common/autotest_common.sh@10 -- # set +x 00:19:03.435 21:15:26 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:03.435 21:15:26 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:03.693 [2024-06-07 21:15:26.189026] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.693 21:15:26 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:19:03.693 21:15:26 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.693 21:15:26 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:03.951 21:15:26 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:19:03.951 21:15:26 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:19:03.951 21:15:26 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:19:03.951 21:15:26 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:03.951 21:15:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:03.951 21:15:26 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:19:03.951 21:15:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:03.951 21:15:26 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:19:03.951 21:15:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:03.951 21:15:26 -- bdev/nbd_common.sh@12 -- # local i 00:19:03.951 21:15:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:03.951 21:15:26 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:03.951 21:15:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:04.209 [2024-06-07 21:15:26.696976] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:19:04.209 /dev/nbd0 00:19:04.209 21:15:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:04.209 21:15:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:04.209 21:15:26 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:19:04.209 21:15:26 -- common/autotest_common.sh@857 -- # local i 00:19:04.209 21:15:26 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:04.209 21:15:26 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:04.209 21:15:26 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:19:04.209 21:15:26 -- common/autotest_common.sh@861 -- # break 00:19:04.209 21:15:26 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:04.209 21:15:26 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:04.209 21:15:26 -- common/autotest_common.sh@873 -- # dd 
if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:04.209 1+0 records in 00:19:04.209 1+0 records out 00:19:04.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563633 s, 7.3 MB/s 00:19:04.209 21:15:26 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.209 21:15:26 -- common/autotest_common.sh@874 -- # size=4096 00:19:04.209 21:15:26 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.209 21:15:26 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:04.209 21:15:26 -- common/autotest_common.sh@877 -- # return 0 00:19:04.209 21:15:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:04.209 21:15:26 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:04.209 21:15:26 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:19:04.209 21:15:26 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:19:04.209 21:15:26 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:19:10.770 63488+0 records in 00:19:10.770 63488+0 records out 00:19:10.770 32505856 bytes (33 MB, 31 MiB) copied, 5.68427 s, 5.7 MB/s 00:19:10.770 21:15:32 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:10.770 21:15:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:10.770 21:15:32 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:19:10.770 21:15:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:10.770 21:15:32 -- bdev/nbd_common.sh@51 -- # local i 00:19:10.770 21:15:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:10.770 21:15:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:10.770 21:15:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:10.770 21:15:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:10.770 21:15:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:10.770 21:15:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:10.770 21:15:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:10.770 21:15:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:10.770 21:15:32 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:19:10.770 [2024-06-07 21:15:32.649425] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:10.770 21:15:32 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:19:10.770 21:15:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:10.770 21:15:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:10.770 21:15:32 -- bdev/nbd_common.sh@41 -- # break 00:19:10.770 21:15:32 -- bdev/nbd_common.sh@45 -- # return 0 00:19:10.770 21:15:32 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:10.770 [2024-06-07 21:15:32.988953] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:10.770 21:15:32 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:10.770 21:15:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:10.770 21:15:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:10.770 21:15:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:10.770 21:15:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:10.770 21:15:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:10.770 21:15:33 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:10.770 21:15:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:10.770 21:15:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:10.770 21:15:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:10.770 21:15:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.770 21:15:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.770 21:15:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:10.770 "name": "raid_bdev1", 00:19:10.770 "uuid": "a3991b17-f7ad-49b3-9b38-1e1b95cae968", 00:19:10.770 "strip_size_kb": 0, 00:19:10.770 "state": "online", 00:19:10.770 "raid_level": "raid1", 00:19:10.770 "superblock": true, 00:19:10.770 "num_base_bdevs": 2, 00:19:10.770 "num_base_bdevs_discovered": 1, 00:19:10.770 "num_base_bdevs_operational": 1, 00:19:10.770 "base_bdevs_list": [ 00:19:10.770 { 00:19:10.770 "name": null, 00:19:10.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.770 "is_configured": false, 00:19:10.770 "data_offset": 2048, 00:19:10.770 "data_size": 63488 00:19:10.770 }, 00:19:10.770 { 00:19:10.770 "name": "BaseBdev2", 00:19:10.770 "uuid": "9ef7473b-8ed1-5f92-85a5-89738027696b", 00:19:10.770 "is_configured": true, 00:19:10.770 "data_offset": 2048, 00:19:10.770 "data_size": 63488 00:19:10.770 } 00:19:10.770 ] 00:19:10.770 }' 00:19:10.770 21:15:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:10.770 21:15:33 -- common/autotest_common.sh@10 -- # set +x 00:19:11.335 21:15:33 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:11.593 [2024-06-07 21:15:34.113360] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:11.593 [2024-06-07 21:15:34.113666] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:11.593 [2024-06-07 21:15:34.121051] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4e30 00:19:11.593 [2024-06-07 21:15:34.123421] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:11.593 21:15:34 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:12.525 21:15:35 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.525 21:15:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:12.525 21:15:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:12.526 21:15:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:12.526 21:15:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:12.526 21:15:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.526 21:15:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.783 21:15:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:12.783 "name": "raid_bdev1", 00:19:12.783 "uuid": "a3991b17-f7ad-49b3-9b38-1e1b95cae968", 00:19:12.783 "strip_size_kb": 0, 00:19:12.783 "state": "online", 00:19:12.783 "raid_level": "raid1", 00:19:12.783 "superblock": true, 00:19:12.783 "num_base_bdevs": 2, 00:19:12.783 "num_base_bdevs_discovered": 2, 00:19:12.783 "num_base_bdevs_operational": 2, 00:19:12.783 "process": { 00:19:12.783 "type": "rebuild", 00:19:12.783 "target": "spare", 00:19:12.783 "progress": { 00:19:12.783 "blocks": 24576, 00:19:12.783 
"percent": 38 00:19:12.783 } 00:19:12.783 }, 00:19:12.783 "base_bdevs_list": [ 00:19:12.783 { 00:19:12.783 "name": "spare", 00:19:12.783 "uuid": "1220739a-157f-5e2e-9164-23c7ad95d8aa", 00:19:12.783 "is_configured": true, 00:19:12.783 "data_offset": 2048, 00:19:12.783 "data_size": 63488 00:19:12.783 }, 00:19:12.783 { 00:19:12.783 "name": "BaseBdev2", 00:19:12.783 "uuid": "9ef7473b-8ed1-5f92-85a5-89738027696b", 00:19:12.783 "is_configured": true, 00:19:12.783 "data_offset": 2048, 00:19:12.783 "data_size": 63488 00:19:12.783 } 00:19:12.783 ] 00:19:12.783 }' 00:19:12.783 21:15:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:12.783 21:15:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:12.783 21:15:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:13.041 21:15:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.041 21:15:35 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:13.317 [2024-06-07 21:15:35.722036] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:13.317 [2024-06-07 21:15:35.735307] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:13.317 [2024-06-07 21:15:35.735634] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.317 21:15:35 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:13.317 21:15:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:13.317 21:15:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:13.317 21:15:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:13.317 21:15:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:13.317 21:15:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:13.317 21:15:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:13.317 21:15:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:13.317 21:15:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:13.317 21:15:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:13.317 21:15:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.317 21:15:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.581 21:15:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:13.581 "name": "raid_bdev1", 00:19:13.581 "uuid": "a3991b17-f7ad-49b3-9b38-1e1b95cae968", 00:19:13.581 "strip_size_kb": 0, 00:19:13.581 "state": "online", 00:19:13.581 "raid_level": "raid1", 00:19:13.581 "superblock": true, 00:19:13.581 "num_base_bdevs": 2, 00:19:13.581 "num_base_bdevs_discovered": 1, 00:19:13.581 "num_base_bdevs_operational": 1, 00:19:13.581 "base_bdevs_list": [ 00:19:13.581 { 00:19:13.581 "name": null, 00:19:13.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.581 "is_configured": false, 00:19:13.581 "data_offset": 2048, 00:19:13.581 "data_size": 63488 00:19:13.581 }, 00:19:13.581 { 00:19:13.581 "name": "BaseBdev2", 00:19:13.581 "uuid": "9ef7473b-8ed1-5f92-85a5-89738027696b", 00:19:13.581 "is_configured": true, 00:19:13.581 "data_offset": 2048, 00:19:13.581 "data_size": 63488 00:19:13.581 } 00:19:13.581 ] 00:19:13.581 }' 00:19:13.581 21:15:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:13.581 21:15:36 -- common/autotest_common.sh@10 -- # set +x 00:19:14.148 21:15:36 -- 
bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:14.148 21:15:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:14.148 21:15:36 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:14.148 21:15:36 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:14.148 21:15:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:14.148 21:15:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.148 21:15:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.407 21:15:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:14.407 "name": "raid_bdev1", 00:19:14.407 "uuid": "a3991b17-f7ad-49b3-9b38-1e1b95cae968", 00:19:14.407 "strip_size_kb": 0, 00:19:14.407 "state": "online", 00:19:14.407 "raid_level": "raid1", 00:19:14.407 "superblock": true, 00:19:14.407 "num_base_bdevs": 2, 00:19:14.407 "num_base_bdevs_discovered": 1, 00:19:14.407 "num_base_bdevs_operational": 1, 00:19:14.407 "base_bdevs_list": [ 00:19:14.407 { 00:19:14.407 "name": null, 00:19:14.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.407 "is_configured": false, 00:19:14.407 "data_offset": 2048, 00:19:14.407 "data_size": 63488 00:19:14.407 }, 00:19:14.407 { 00:19:14.407 "name": "BaseBdev2", 00:19:14.407 "uuid": "9ef7473b-8ed1-5f92-85a5-89738027696b", 00:19:14.407 "is_configured": true, 00:19:14.407 "data_offset": 2048, 00:19:14.407 "data_size": 63488 00:19:14.407 } 00:19:14.407 ] 00:19:14.407 }' 00:19:14.407 21:15:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:14.407 21:15:36 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:14.407 21:15:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:14.407 21:15:36 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:14.407 21:15:36 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:14.665 [2024-06-07 21:15:37.191147] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:14.665 [2024-06-07 21:15:37.191394] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:14.665 [2024-06-07 21:15:37.198917] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4fd0 00:19:14.665 [2024-06-07 21:15:37.201667] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:14.665 21:15:37 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:15.599 21:15:38 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.599 21:15:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:15.599 21:15:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:15.599 21:15:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:15.599 21:15:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:15.599 21:15:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.599 21:15:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.857 21:15:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:15.857 "name": "raid_bdev1", 00:19:15.857 "uuid": "a3991b17-f7ad-49b3-9b38-1e1b95cae968", 00:19:15.857 "strip_size_kb": 0, 00:19:15.857 "state": "online", 00:19:15.857 "raid_level": "raid1", 00:19:15.857 
"superblock": true, 00:19:15.857 "num_base_bdevs": 2, 00:19:15.857 "num_base_bdevs_discovered": 2, 00:19:15.857 "num_base_bdevs_operational": 2, 00:19:15.857 "process": { 00:19:15.857 "type": "rebuild", 00:19:15.857 "target": "spare", 00:19:15.857 "progress": { 00:19:15.857 "blocks": 24576, 00:19:15.857 "percent": 38 00:19:15.857 } 00:19:15.857 }, 00:19:15.857 "base_bdevs_list": [ 00:19:15.857 { 00:19:15.857 "name": "spare", 00:19:15.857 "uuid": "1220739a-157f-5e2e-9164-23c7ad95d8aa", 00:19:15.857 "is_configured": true, 00:19:15.857 "data_offset": 2048, 00:19:15.857 "data_size": 63488 00:19:15.857 }, 00:19:15.857 { 00:19:15.857 "name": "BaseBdev2", 00:19:15.857 "uuid": "9ef7473b-8ed1-5f92-85a5-89738027696b", 00:19:15.857 "is_configured": true, 00:19:15.857 "data_offset": 2048, 00:19:15.857 "data_size": 63488 00:19:15.857 } 00:19:15.857 ] 00:19:15.857 }' 00:19:15.857 21:15:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:16.116 21:15:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:16.116 21:15:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:16.116 21:15:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:16.116 21:15:38 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:19:16.116 21:15:38 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:19:16.116 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:19:16.116 21:15:38 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:19:16.116 21:15:38 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:16.116 21:15:38 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:19:16.116 21:15:38 -- bdev/bdev_raid.sh@657 -- # local timeout=398 00:19:16.116 21:15:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:16.116 21:15:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.116 21:15:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:16.116 21:15:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:16.116 21:15:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:16.116 21:15:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:16.116 21:15:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.116 21:15:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.375 21:15:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:16.375 "name": "raid_bdev1", 00:19:16.375 "uuid": "a3991b17-f7ad-49b3-9b38-1e1b95cae968", 00:19:16.375 "strip_size_kb": 0, 00:19:16.375 "state": "online", 00:19:16.375 "raid_level": "raid1", 00:19:16.375 "superblock": true, 00:19:16.375 "num_base_bdevs": 2, 00:19:16.375 "num_base_bdevs_discovered": 2, 00:19:16.375 "num_base_bdevs_operational": 2, 00:19:16.375 "process": { 00:19:16.375 "type": "rebuild", 00:19:16.375 "target": "spare", 00:19:16.375 "progress": { 00:19:16.375 "blocks": 32768, 00:19:16.375 "percent": 51 00:19:16.375 } 00:19:16.375 }, 00:19:16.375 "base_bdevs_list": [ 00:19:16.375 { 00:19:16.375 "name": "spare", 00:19:16.375 "uuid": "1220739a-157f-5e2e-9164-23c7ad95d8aa", 00:19:16.375 "is_configured": true, 00:19:16.375 "data_offset": 2048, 00:19:16.375 "data_size": 63488 00:19:16.375 }, 00:19:16.375 { 00:19:16.375 "name": "BaseBdev2", 00:19:16.375 "uuid": "9ef7473b-8ed1-5f92-85a5-89738027696b", 00:19:16.375 "is_configured": true, 00:19:16.375 "data_offset": 2048, 00:19:16.375 
"data_size": 63488 00:19:16.375 } 00:19:16.375 ] 00:19:16.375 }' 00:19:16.375 21:15:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:16.375 21:15:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:16.375 21:15:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:16.375 21:15:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:16.375 21:15:38 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:17.311 21:15:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:17.311 21:15:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.311 21:15:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:17.311 21:15:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:17.311 21:15:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:17.311 21:15:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:17.311 21:15:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.311 21:15:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.570 21:15:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:17.570 "name": "raid_bdev1", 00:19:17.570 "uuid": "a3991b17-f7ad-49b3-9b38-1e1b95cae968", 00:19:17.570 "strip_size_kb": 0, 00:19:17.570 "state": "online", 00:19:17.570 "raid_level": "raid1", 00:19:17.570 "superblock": true, 00:19:17.570 "num_base_bdevs": 2, 00:19:17.570 "num_base_bdevs_discovered": 2, 00:19:17.570 "num_base_bdevs_operational": 2, 00:19:17.570 "process": { 00:19:17.570 "type": "rebuild", 00:19:17.570 "target": "spare", 00:19:17.570 "progress": { 00:19:17.570 "blocks": 59392, 00:19:17.570 "percent": 93 00:19:17.570 } 00:19:17.570 }, 00:19:17.570 "base_bdevs_list": [ 00:19:17.570 { 00:19:17.570 "name": "spare", 00:19:17.570 "uuid": "1220739a-157f-5e2e-9164-23c7ad95d8aa", 00:19:17.570 "is_configured": true, 00:19:17.570 "data_offset": 2048, 00:19:17.570 "data_size": 63488 00:19:17.570 }, 00:19:17.570 { 00:19:17.570 "name": "BaseBdev2", 00:19:17.570 "uuid": "9ef7473b-8ed1-5f92-85a5-89738027696b", 00:19:17.570 "is_configured": true, 00:19:17.570 "data_offset": 2048, 00:19:17.570 "data_size": 63488 00:19:17.570 } 00:19:17.570 ] 00:19:17.570 }' 00:19:17.570 21:15:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:17.828 21:15:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.828 21:15:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:17.828 [2024-06-07 21:15:40.322712] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:17.828 [2024-06-07 21:15:40.323062] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:17.828 [2024-06-07 21:15:40.323394] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.828 21:15:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.828 21:15:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:18.764 21:15:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:18.764 21:15:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:18.764 21:15:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:18.764 21:15:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:18.764 21:15:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:18.764 21:15:41 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:18.764 21:15:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.764 21:15:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.022 21:15:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:19.022 "name": "raid_bdev1", 00:19:19.022 "uuid": "a3991b17-f7ad-49b3-9b38-1e1b95cae968", 00:19:19.022 "strip_size_kb": 0, 00:19:19.022 "state": "online", 00:19:19.022 "raid_level": "raid1", 00:19:19.022 "superblock": true, 00:19:19.022 "num_base_bdevs": 2, 00:19:19.022 "num_base_bdevs_discovered": 2, 00:19:19.022 "num_base_bdevs_operational": 2, 00:19:19.022 "base_bdevs_list": [ 00:19:19.022 { 00:19:19.022 "name": "spare", 00:19:19.022 "uuid": "1220739a-157f-5e2e-9164-23c7ad95d8aa", 00:19:19.022 "is_configured": true, 00:19:19.022 "data_offset": 2048, 00:19:19.022 "data_size": 63488 00:19:19.022 }, 00:19:19.022 { 00:19:19.022 "name": "BaseBdev2", 00:19:19.022 "uuid": "9ef7473b-8ed1-5f92-85a5-89738027696b", 00:19:19.022 "is_configured": true, 00:19:19.022 "data_offset": 2048, 00:19:19.022 "data_size": 63488 00:19:19.022 } 00:19:19.022 ] 00:19:19.022 }' 00:19:19.022 21:15:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:19.022 21:15:41 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:19.022 21:15:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:19.022 21:15:41 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:19.022 21:15:41 -- bdev/bdev_raid.sh@660 -- # break 00:19:19.022 21:15:41 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:19.022 21:15:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:19.022 21:15:41 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:19.022 21:15:41 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:19.022 21:15:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:19.022 21:15:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.022 21:15:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.280 21:15:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:19.280 "name": "raid_bdev1", 00:19:19.280 "uuid": "a3991b17-f7ad-49b3-9b38-1e1b95cae968", 00:19:19.281 "strip_size_kb": 0, 00:19:19.281 "state": "online", 00:19:19.281 "raid_level": "raid1", 00:19:19.281 "superblock": true, 00:19:19.281 "num_base_bdevs": 2, 00:19:19.281 "num_base_bdevs_discovered": 2, 00:19:19.281 "num_base_bdevs_operational": 2, 00:19:19.281 "base_bdevs_list": [ 00:19:19.281 { 00:19:19.281 "name": "spare", 00:19:19.281 "uuid": "1220739a-157f-5e2e-9164-23c7ad95d8aa", 00:19:19.281 "is_configured": true, 00:19:19.281 "data_offset": 2048, 00:19:19.281 "data_size": 63488 00:19:19.281 }, 00:19:19.281 { 00:19:19.281 "name": "BaseBdev2", 00:19:19.281 "uuid": "9ef7473b-8ed1-5f92-85a5-89738027696b", 00:19:19.281 "is_configured": true, 00:19:19.281 "data_offset": 2048, 00:19:19.281 "data_size": 63488 00:19:19.281 } 00:19:19.281 ] 00:19:19.281 }' 00:19:19.281 21:15:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:19.538 21:15:41 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:19.538 21:15:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:19.538 21:15:42 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:19.538 21:15:42 -- 
bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:19.538 21:15:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:19.538 21:15:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:19.538 21:15:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:19.539 21:15:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:19.539 21:15:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:19.539 21:15:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:19.539 21:15:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:19.539 21:15:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:19.539 21:15:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:19.539 21:15:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.539 21:15:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.797 21:15:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:19.797 "name": "raid_bdev1", 00:19:19.797 "uuid": "a3991b17-f7ad-49b3-9b38-1e1b95cae968", 00:19:19.797 "strip_size_kb": 0, 00:19:19.797 "state": "online", 00:19:19.797 "raid_level": "raid1", 00:19:19.797 "superblock": true, 00:19:19.797 "num_base_bdevs": 2, 00:19:19.797 "num_base_bdevs_discovered": 2, 00:19:19.797 "num_base_bdevs_operational": 2, 00:19:19.797 "base_bdevs_list": [ 00:19:19.797 { 00:19:19.797 "name": "spare", 00:19:19.797 "uuid": "1220739a-157f-5e2e-9164-23c7ad95d8aa", 00:19:19.797 "is_configured": true, 00:19:19.797 "data_offset": 2048, 00:19:19.797 "data_size": 63488 00:19:19.797 }, 00:19:19.797 { 00:19:19.797 "name": "BaseBdev2", 00:19:19.797 "uuid": "9ef7473b-8ed1-5f92-85a5-89738027696b", 00:19:19.797 "is_configured": true, 00:19:19.797 "data_offset": 2048, 00:19:19.797 "data_size": 63488 00:19:19.797 } 00:19:19.797 ] 00:19:19.797 }' 00:19:19.797 21:15:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:19.797 21:15:42 -- common/autotest_common.sh@10 -- # set +x 00:19:20.363 21:15:43 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:20.620 [2024-06-07 21:15:43.254353] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:20.620 [2024-06-07 21:15:43.254594] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:20.620 [2024-06-07 21:15:43.254869] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.620 [2024-06-07 21:15:43.255165] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:20.620 [2024-06-07 21:15:43.255325] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:19:20.620 21:15:43 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.620 21:15:43 -- bdev/bdev_raid.sh@671 -- # jq length 00:19:20.878 21:15:43 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:19:20.878 21:15:43 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:19:20.878 21:15:43 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:20.878 21:15:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:20.878 21:15:43 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 
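Two notes on the pass above. First, at 21:15:38 the xtrace recorded bdev_raid.sh@617 evaluating '[' = false ']' and bash complaining "line 617: [: =: unary operator expected": a variable expanded to nothing and left test(1) one operand short. The failed test is simply treated as false, so the run survives, but it is a latent script bug; quoting the expansion (or using [[ ]], which tolerates empty words) makes the empty case well-formed. The variable name below is illustrative, standing for whatever @617 actually expands:

    flag=""
    [ "$flag" = false ] && echo yes || echo no   # quoted: compares "" = false, no error
    [[ $flag = false ]] && echo yes || echo no   # [[ ]]: no word splitting, also safe

Second, the nbd export being assembled here feeds a content check that must skip the metadata region: with a superblock, the payload starts at data_offset 2048 blocks, so the cmp below is invoked with -i 1048576, the superblock variant's payload offset in bytes:

    # data_offset (blocks) x blocklen (bytes) = byte offset for cmp -i
    echo $(( 2048 * 512 ))   # 1048576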
00:19:20.878 21:15:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:20.878 21:15:43 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:19:20.878 21:15:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:20.878 21:15:43 -- bdev/nbd_common.sh@12 -- # local i 00:19:20.878 21:15:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:20.878 21:15:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:20.878 21:15:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:21.136 /dev/nbd0 00:19:21.136 21:15:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:21.136 21:15:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:21.136 21:15:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:19:21.136 21:15:43 -- common/autotest_common.sh@857 -- # local i 00:19:21.136 21:15:43 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:21.136 21:15:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:21.136 21:15:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:19:21.136 21:15:43 -- common/autotest_common.sh@861 -- # break 00:19:21.136 21:15:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:21.136 21:15:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:21.136 21:15:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:21.136 1+0 records in 00:19:21.136 1+0 records out 00:19:21.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453783 s, 9.0 MB/s 00:19:21.136 21:15:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:21.136 21:15:43 -- common/autotest_common.sh@874 -- # size=4096 00:19:21.136 21:15:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:21.136 21:15:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:21.136 21:15:43 -- common/autotest_common.sh@877 -- # return 0 00:19:21.136 21:15:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:21.136 21:15:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:21.136 21:15:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:19:21.399 /dev/nbd1 00:19:21.399 21:15:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:21.399 21:15:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:21.399 21:15:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:19:21.399 21:15:43 -- common/autotest_common.sh@857 -- # local i 00:19:21.399 21:15:43 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:21.399 21:15:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:21.399 21:15:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:19:21.399 21:15:43 -- common/autotest_common.sh@861 -- # break 00:19:21.399 21:15:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:21.399 21:15:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:21.399 21:15:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:21.399 1+0 records in 00:19:21.399 1+0 records out 00:19:21.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676253 s, 6.1 MB/s 00:19:21.399 21:15:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:21.399 21:15:44 -- common/autotest_common.sh@874 -- # 
size=4096 00:19:21.399 21:15:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:21.399 21:15:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:21.399 21:15:44 -- common/autotest_common.sh@877 -- # return 0 00:19:21.399 21:15:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:21.399 21:15:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:21.399 21:15:44 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:21.663 21:15:44 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:19:21.663 21:15:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:21.663 21:15:44 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:19:21.663 21:15:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:21.663 21:15:44 -- bdev/nbd_common.sh@51 -- # local i 00:19:21.663 21:15:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:21.663 21:15:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:21.921 21:15:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:21.921 21:15:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:21.921 21:15:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:21.921 21:15:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:21.921 21:15:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:21.921 21:15:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:21.921 21:15:44 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:19:21.921 21:15:44 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:19:21.921 21:15:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:21.921 21:15:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:21.921 21:15:44 -- bdev/nbd_common.sh@41 -- # break 00:19:21.921 21:15:44 -- bdev/nbd_common.sh@45 -- # return 0 00:19:21.921 21:15:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:21.921 21:15:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:22.179 21:15:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:22.179 21:15:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:22.179 21:15:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:22.179 21:15:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:22.179 21:15:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:22.179 21:15:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:22.180 21:15:44 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:19:22.437 21:15:44 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:19:22.437 21:15:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:22.437 21:15:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:22.437 21:15:44 -- bdev/nbd_common.sh@41 -- # break 00:19:22.437 21:15:44 -- bdev/nbd_common.sh@45 -- # return 0 00:19:22.437 21:15:44 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:19:22.437 21:15:44 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:19:22.437 21:15:44 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:19:22.437 21:15:44 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:19:22.696 21:15:45 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 
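Recreating the passthru bdevs is the point of this final phase: raid_bdev1 was deleted above, but each bdev_passthru_create re-registers a base bdev, the raid module's examine path reads the superblock that the -s creation wrote to it, and raid_bdev1 is re-assembled with no explicit bdev_raid_create at all. The notices that follow show exactly that, including a seq_number comparison that discards a stale single-member assembly once BaseBdev2 arrives carrying a newer superblock generation. The loop, condensed from the trace (rpc is shorthand for the suite's rpc.py invocation):

    # Re-register each member; examine-driven reassembly does the rest.
    rpc="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for bdev in BaseBdev1 BaseBdev2; do
        $rpc bdev_passthru_delete "$bdev"
        $rpc bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
    done
    # spare sits on a delay bdev rather than raw malloc:
    $rpc bdev_passthru_delete spare
    $rpc bdev_passthru_create -b spare_delay -p spare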
00:19:22.696 [2024-06-07 21:15:45.356044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:22.696 [2024-06-07 21:15:45.356359] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.696 [2024-06-07 21:15:45.356443] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:22.696 [2024-06-07 21:15:45.356642] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.696 [2024-06-07 21:15:45.359288] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.696 [2024-06-07 21:15:45.359502] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:22.696 [2024-06-07 21:15:45.359719] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:22.696 [2024-06-07 21:15:45.359950] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:22.696 BaseBdev1 00:19:22.696 21:15:45 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:19:22.696 21:15:45 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:19:22.696 21:15:45 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:19:22.954 21:15:45 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:23.213 [2024-06-07 21:15:45.816348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:23.213 [2024-06-07 21:15:45.816678] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.213 [2024-06-07 21:15:45.816766] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:23.213 [2024-06-07 21:15:45.816928] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.213 [2024-06-07 21:15:45.817531] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.213 [2024-06-07 21:15:45.817735] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:23.213 [2024-06-07 21:15:45.817973] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:19:23.213 [2024-06-07 21:15:45.818099] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:19:23.213 [2024-06-07 21:15:45.818205] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:23.213 [2024-06-07 21:15:45.818294] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:19:23.213 [2024-06-07 21:15:45.818507] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:23.213 BaseBdev2 00:19:23.213 21:15:45 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:23.472 21:15:46 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:23.731 [2024-06-07 21:15:46.228474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:23.731 [2024-06-07 21:15:46.228689] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
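Every re-registration in this stretch (BaseBdev1, BaseBdev2, and spare) follows the same recipe: delete the passthru bdev, re-create it on its *_malloc or *_delay base, and let SPDK's examine path rediscover the raid superblock, which is exactly what the NOTICE/DEBUG lines here record. Condensed into a sketch, not the test's verbatim code:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_passthru_delete BaseBdev1                          # drop the old claim
$rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
# the create triggers examine: vbdev_passthru registers pt_bdev, bdev_raid
# finds the raid superblock on it and re-claims BaseBdev1 into raid_bdev1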
00:19:23.731 [2024-06-07 21:15:46.228921] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:23.731 [2024-06-07 21:15:46.229095] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.731 [2024-06-07 21:15:46.229843] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.731 [2024-06-07 21:15:46.230066] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:23.731 [2024-06-07 21:15:46.230272] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:19:23.731 [2024-06-07 21:15:46.230438] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:23.731 spare 00:19:23.731 21:15:46 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:23.731 21:15:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:23.731 21:15:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:23.731 21:15:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:23.731 21:15:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:23.731 21:15:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:23.731 21:15:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:23.731 21:15:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:23.731 21:15:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:23.731 21:15:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:23.731 21:15:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.731 21:15:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.731 [2024-06-07 21:15:46.330610] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:19:23.731 [2024-06-07 21:15:46.330771] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:23.731 [2024-06-07 21:15:46.330989] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5b10 00:19:23.731 [2024-06-07 21:15:46.331666] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:19:23.731 [2024-06-07 21:15:46.331812] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:19:23.731 [2024-06-07 21:15:46.332114] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.989 21:15:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:23.989 "name": "raid_bdev1", 00:19:23.989 "uuid": "a3991b17-f7ad-49b3-9b38-1e1b95cae968", 00:19:23.989 "strip_size_kb": 0, 00:19:23.989 "state": "online", 00:19:23.989 "raid_level": "raid1", 00:19:23.989 "superblock": true, 00:19:23.989 "num_base_bdevs": 2, 00:19:23.989 "num_base_bdevs_discovered": 2, 00:19:23.989 "num_base_bdevs_operational": 2, 00:19:23.989 "base_bdevs_list": [ 00:19:23.989 { 00:19:23.989 "name": "spare", 00:19:23.989 "uuid": "1220739a-157f-5e2e-9164-23c7ad95d8aa", 00:19:23.989 "is_configured": true, 00:19:23.989 "data_offset": 2048, 00:19:23.989 "data_size": 63488 00:19:23.989 }, 00:19:23.989 { 00:19:23.989 "name": "BaseBdev2", 00:19:23.989 "uuid": "9ef7473b-8ed1-5f92-85a5-89738027696b", 00:19:23.989 "is_configured": true, 00:19:23.989 "data_offset": 2048, 00:19:23.989 "data_size": 63488 00:19:23.989 } 00:19:23.989 ] 00:19:23.989 }' 00:19:23.989 21:15:46 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:23.989 21:15:46 -- common/autotest_common.sh@10 -- # set +x 00:19:24.555 21:15:47 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:24.556 21:15:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:24.556 21:15:47 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:24.556 21:15:47 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:24.556 21:15:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:24.556 21:15:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.556 21:15:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.814 21:15:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:24.814 "name": "raid_bdev1", 00:19:24.814 "uuid": "a3991b17-f7ad-49b3-9b38-1e1b95cae968", 00:19:24.814 "strip_size_kb": 0, 00:19:24.814 "state": "online", 00:19:24.814 "raid_level": "raid1", 00:19:24.814 "superblock": true, 00:19:24.814 "num_base_bdevs": 2, 00:19:24.814 "num_base_bdevs_discovered": 2, 00:19:24.814 "num_base_bdevs_operational": 2, 00:19:24.814 "base_bdevs_list": [ 00:19:24.814 { 00:19:24.814 "name": "spare", 00:19:24.814 "uuid": "1220739a-157f-5e2e-9164-23c7ad95d8aa", 00:19:24.814 "is_configured": true, 00:19:24.814 "data_offset": 2048, 00:19:24.814 "data_size": 63488 00:19:24.814 }, 00:19:24.814 { 00:19:24.814 "name": "BaseBdev2", 00:19:24.814 "uuid": "9ef7473b-8ed1-5f92-85a5-89738027696b", 00:19:24.814 "is_configured": true, 00:19:24.814 "data_offset": 2048, 00:19:24.814 "data_size": 63488 00:19:24.814 } 00:19:24.814 ] 00:19:24.814 }' 00:19:24.814 21:15:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:24.814 21:15:47 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:24.814 21:15:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:25.073 21:15:47 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:25.073 21:15:47 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.073 21:15:47 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:25.332 21:15:47 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:19:25.332 21:15:47 -- bdev/bdev_raid.sh@709 -- # killprocess 136473 00:19:25.332 21:15:47 -- common/autotest_common.sh@926 -- # '[' -z 136473 ']' 00:19:25.332 21:15:47 -- common/autotest_common.sh@930 -- # kill -0 136473 00:19:25.332 21:15:47 -- common/autotest_common.sh@931 -- # uname 00:19:25.332 21:15:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:25.332 21:15:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136473 00:19:25.332 killing process with pid 136473 00:19:25.332 Received shutdown signal, test time was about 60.000000 seconds 00:19:25.332 00:19:25.332 Latency(us) 00:19:25.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.332 =================================================================================================================== 00:19:25.332 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:25.332 21:15:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:25.332 21:15:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:25.332 21:15:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136473' 00:19:25.332 21:15:47 -- 
common/autotest_common.sh@945 -- # kill 136473 00:19:25.332 21:15:47 -- common/autotest_common.sh@950 -- # wait 136473 00:19:25.332 [2024-06-07 21:15:47.809404] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:25.332 [2024-06-07 21:15:47.809588] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.332 [2024-06-07 21:15:47.809738] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.332 [2024-06-07 21:15:47.809799] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:19:25.332 [2024-06-07 21:15:47.846207] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@711 -- # return 0 00:19:25.591 00:19:25.591 real 0m25.775s 00:19:25.591 user 0m37.219s 00:19:25.591 sys 0m4.658s 00:19:25.591 ************************************ 00:19:25.591 END TEST raid_rebuild_test_sb 00:19:25.591 ************************************ 00:19:25.591 21:15:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:25.591 21:15:48 -- common/autotest_common.sh@10 -- # set +x 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:19:25.591 21:15:48 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:19:25.591 21:15:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:25.591 21:15:48 -- common/autotest_common.sh@10 -- # set +x 00:19:25.591 ************************************ 00:19:25.591 START TEST raid_rebuild_test_io 00:19:25.591 ************************************ 00:19:25.591 21:15:48 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false true 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@544 -- # raid_pid=137161 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@543 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:25.591 21:15:48 -- bdev/bdev_raid.sh@545 -- # waitforlisten 137161 /var/tmp/spdk-raid.sock 00:19:25.591 21:15:48 -- common/autotest_common.sh@819 -- # '[' -z 137161 ']' 00:19:25.591 21:15:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:25.591 21:15:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:25.591 21:15:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:25.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:25.591 21:15:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:25.591 21:15:48 -- common/autotest_common.sh@10 -- # set +x 00:19:25.850 [2024-06-07 21:15:48.295719] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:25.850 [2024-06-07 21:15:48.296243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137161 ] 00:19:25.850 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:25.850 Zero copy mechanism will not be used. 00:19:25.850 [2024-06-07 21:15:48.453736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.108 [2024-06-07 21:15:48.538989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.108 [2024-06-07 21:15:48.616069] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.675 21:15:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:26.675 21:15:49 -- common/autotest_common.sh@852 -- # return 0 00:19:26.675 21:15:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:26.675 21:15:49 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:26.675 21:15:49 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:26.933 BaseBdev1 00:19:26.933 21:15:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:26.933 21:15:49 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:26.933 21:15:49 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:27.192 BaseBdev2 00:19:27.192 21:15:49 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:27.451 spare_malloc 00:19:27.451 21:15:50 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:27.710 spare_delay 00:19:27.710 21:15:50 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:27.968 [2024-06-07 21:15:50.463322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:27.968 [2024-06-07 21:15:50.463672] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.968 [2024-06-07 21:15:50.463751] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 
00:19:27.968 [2024-06-07 21:15:50.464008] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.968 [2024-06-07 21:15:50.466725] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.968 [2024-06-07 21:15:50.466909] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:27.968 spare 00:19:27.968 21:15:50 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:28.227 [2024-06-07 21:15:50.667466] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:28.227 [2024-06-07 21:15:50.669676] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:28.227 [2024-06-07 21:15:50.669916] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:19:28.227 [2024-06-07 21:15:50.669962] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:28.227 [2024-06-07 21:15:50.670295] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:19:28.227 [2024-06-07 21:15:50.670882] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:19:28.227 [2024-06-07 21:15:50.671033] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:19:28.227 [2024-06-07 21:15:50.671423] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:28.227 21:15:50 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:28.227 21:15:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:28.227 21:15:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:28.227 21:15:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:28.227 21:15:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:28.227 21:15:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:28.227 21:15:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:28.227 21:15:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:28.227 21:15:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:28.227 21:15:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:28.227 21:15:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.227 21:15:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.227 21:15:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:28.227 "name": "raid_bdev1", 00:19:28.227 "uuid": "b00a0f9c-d0aa-4e22-8f5e-779fb7acbe1e", 00:19:28.227 "strip_size_kb": 0, 00:19:28.227 "state": "online", 00:19:28.227 "raid_level": "raid1", 00:19:28.227 "superblock": false, 00:19:28.227 "num_base_bdevs": 2, 00:19:28.227 "num_base_bdevs_discovered": 2, 00:19:28.227 "num_base_bdevs_operational": 2, 00:19:28.227 "base_bdevs_list": [ 00:19:28.227 { 00:19:28.227 "name": "BaseBdev1", 00:19:28.227 "uuid": "a17488d4-69cc-46a6-af92-3048550e6d0e", 00:19:28.227 "is_configured": true, 00:19:28.227 "data_offset": 0, 00:19:28.227 "data_size": 65536 00:19:28.227 }, 00:19:28.227 { 00:19:28.227 "name": "BaseBdev2", 00:19:28.227 "uuid": "e13e983a-d9cb-4ec2-8585-1ba7e1129df4", 00:19:28.227 "is_configured": true, 00:19:28.227 "data_offset": 0, 00:19:28.227 "data_size": 65536 00:19:28.227 } 00:19:28.227 ] 
00:19:28.227 }' 00:19:28.227 21:15:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:28.227 21:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:29.192 21:15:51 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:29.192 21:15:51 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:29.192 [2024-06-07 21:15:51.780050] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:29.192 21:15:51 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:19:29.192 21:15:51 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.192 21:15:51 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:29.450 21:15:52 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:19:29.450 21:15:52 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:19:29.450 21:15:52 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:29.450 21:15:52 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:29.450 [2024-06-07 21:15:52.115373] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:29.450 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:29.450 Zero copy mechanism will not be used. 00:19:29.450 Running I/O for 60 seconds... 00:19:29.709 [2024-06-07 21:15:52.240610] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:29.709 [2024-06-07 21:15:52.247892] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005790 00:19:29.709 21:15:52 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:29.709 21:15:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:29.709 21:15:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:29.709 21:15:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:29.709 21:15:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:29.709 21:15:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:29.709 21:15:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:29.709 21:15:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:29.709 21:15:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:29.709 21:15:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:29.709 21:15:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.709 21:15:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.968 21:15:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:29.968 "name": "raid_bdev1", 00:19:29.968 "uuid": "b00a0f9c-d0aa-4e22-8f5e-779fb7acbe1e", 00:19:29.968 "strip_size_kb": 0, 00:19:29.968 "state": "online", 00:19:29.968 "raid_level": "raid1", 00:19:29.968 "superblock": false, 00:19:29.968 "num_base_bdevs": 2, 00:19:29.968 "num_base_bdevs_discovered": 1, 00:19:29.968 "num_base_bdevs_operational": 1, 00:19:29.968 "base_bdevs_list": [ 00:19:29.968 { 00:19:29.968 "name": null, 00:19:29.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.968 "is_configured": false, 00:19:29.968 "data_offset": 0, 00:19:29.968 "data_size": 65536 00:19:29.968 }, 00:19:29.968 { 00:19:29.968 
"name": "BaseBdev2", 00:19:29.968 "uuid": "e13e983a-d9cb-4ec2-8585-1ba7e1129df4", 00:19:29.968 "is_configured": true, 00:19:29.968 "data_offset": 0, 00:19:29.969 "data_size": 65536 00:19:29.969 } 00:19:29.969 ] 00:19:29.969 }' 00:19:29.969 21:15:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:29.969 21:15:52 -- common/autotest_common.sh@10 -- # set +x 00:19:30.534 21:15:53 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:30.793 [2024-06-07 21:15:53.429093] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:30.793 [2024-06-07 21:15:53.429360] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:30.793 21:15:53 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:31.050 [2024-06-07 21:15:53.469883] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:19:31.050 [2024-06-07 21:15:53.472697] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:31.050 [2024-06-07 21:15:53.582101] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:31.050 [2024-06-07 21:15:53.582555] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:31.308 [2024-06-07 21:15:53.801028] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:31.308 [2024-06-07 21:15:53.801575] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:31.565 [2024-06-07 21:15:54.134665] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:31.565 [2024-06-07 21:15:54.135178] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:31.823 [2024-06-07 21:15:54.245506] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:31.823 21:15:54 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:31.823 21:15:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:31.823 21:15:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:31.823 21:15:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:31.823 21:15:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:31.823 [2024-06-07 21:15:54.469709] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:31.823 [2024-06-07 21:15:54.470176] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:31.823 21:15:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.823 21:15:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.080 [2024-06-07 21:15:54.672877] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:32.080 21:15:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:32.080 "name": "raid_bdev1", 00:19:32.080 "uuid": "b00a0f9c-d0aa-4e22-8f5e-779fb7acbe1e", 00:19:32.080 "strip_size_kb": 0, 
00:19:32.080 "state": "online", 00:19:32.080 "raid_level": "raid1", 00:19:32.080 "superblock": false, 00:19:32.081 "num_base_bdevs": 2, 00:19:32.081 "num_base_bdevs_discovered": 2, 00:19:32.081 "num_base_bdevs_operational": 2, 00:19:32.081 "process": { 00:19:32.081 "type": "rebuild", 00:19:32.081 "target": "spare", 00:19:32.081 "progress": { 00:19:32.081 "blocks": 16384, 00:19:32.081 "percent": 25 00:19:32.081 } 00:19:32.081 }, 00:19:32.081 "base_bdevs_list": [ 00:19:32.081 { 00:19:32.081 "name": "spare", 00:19:32.081 "uuid": "5f7730aa-a4ef-5a2e-b08a-4a1f863c639c", 00:19:32.081 "is_configured": true, 00:19:32.081 "data_offset": 0, 00:19:32.081 "data_size": 65536 00:19:32.081 }, 00:19:32.081 { 00:19:32.081 "name": "BaseBdev2", 00:19:32.081 "uuid": "e13e983a-d9cb-4ec2-8585-1ba7e1129df4", 00:19:32.081 "is_configured": true, 00:19:32.081 "data_offset": 0, 00:19:32.081 "data_size": 65536 00:19:32.081 } 00:19:32.081 ] 00:19:32.081 }' 00:19:32.081 21:15:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:32.339 21:15:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:32.339 21:15:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:32.339 21:15:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:32.339 21:15:54 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:32.339 [2024-06-07 21:15:54.907999] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:32.339 [2024-06-07 21:15:54.908902] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:32.597 [2024-06-07 21:15:55.074728] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:32.597 [2024-06-07 21:15:55.126926] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:32.597 [2024-06-07 21:15:55.239975] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:32.597 [2024-06-07 21:15:55.249382] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.597 [2024-06-07 21:15:55.265804] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005790 00:19:32.855 21:15:55 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:32.855 21:15:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:32.855 21:15:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:32.855 21:15:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:32.855 21:15:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:32.855 21:15:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:32.855 21:15:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:32.855 21:15:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:32.855 21:15:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:32.855 21:15:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:32.855 21:15:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.855 21:15:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.855 21:15:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:32.855 
"name": "raid_bdev1", 00:19:32.855 "uuid": "b00a0f9c-d0aa-4e22-8f5e-779fb7acbe1e", 00:19:32.855 "strip_size_kb": 0, 00:19:32.855 "state": "online", 00:19:32.855 "raid_level": "raid1", 00:19:32.855 "superblock": false, 00:19:32.855 "num_base_bdevs": 2, 00:19:32.855 "num_base_bdevs_discovered": 1, 00:19:32.855 "num_base_bdevs_operational": 1, 00:19:32.855 "base_bdevs_list": [ 00:19:32.855 { 00:19:32.855 "name": null, 00:19:32.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.855 "is_configured": false, 00:19:32.855 "data_offset": 0, 00:19:32.855 "data_size": 65536 00:19:32.855 }, 00:19:32.855 { 00:19:32.855 "name": "BaseBdev2", 00:19:32.855 "uuid": "e13e983a-d9cb-4ec2-8585-1ba7e1129df4", 00:19:32.855 "is_configured": true, 00:19:32.855 "data_offset": 0, 00:19:32.855 "data_size": 65536 00:19:32.855 } 00:19:32.855 ] 00:19:32.855 }' 00:19:32.855 21:15:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:32.855 21:15:55 -- common/autotest_common.sh@10 -- # set +x 00:19:33.790 21:15:56 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:33.790 21:15:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:33.790 21:15:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:33.790 21:15:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:33.790 21:15:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:33.790 21:15:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.790 21:15:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.048 21:15:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:34.048 "name": "raid_bdev1", 00:19:34.048 "uuid": "b00a0f9c-d0aa-4e22-8f5e-779fb7acbe1e", 00:19:34.048 "strip_size_kb": 0, 00:19:34.048 "state": "online", 00:19:34.048 "raid_level": "raid1", 00:19:34.048 "superblock": false, 00:19:34.048 "num_base_bdevs": 2, 00:19:34.048 "num_base_bdevs_discovered": 1, 00:19:34.048 "num_base_bdevs_operational": 1, 00:19:34.048 "base_bdevs_list": [ 00:19:34.048 { 00:19:34.048 "name": null, 00:19:34.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.048 "is_configured": false, 00:19:34.048 "data_offset": 0, 00:19:34.048 "data_size": 65536 00:19:34.048 }, 00:19:34.048 { 00:19:34.048 "name": "BaseBdev2", 00:19:34.048 "uuid": "e13e983a-d9cb-4ec2-8585-1ba7e1129df4", 00:19:34.048 "is_configured": true, 00:19:34.048 "data_offset": 0, 00:19:34.048 "data_size": 65536 00:19:34.048 } 00:19:34.048 ] 00:19:34.048 }' 00:19:34.048 21:15:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:34.048 21:15:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:34.048 21:15:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:34.048 21:15:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:34.048 21:15:56 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:34.306 [2024-06-07 21:15:56.761999] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:34.306 [2024-06-07 21:15:56.762336] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:34.306 21:15:56 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:34.306 [2024-06-07 21:15:56.809013] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:19:34.306 [2024-06-07 21:15:56.811381] 
bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:34.306 [2024-06-07 21:15:56.920034] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:34.306 [2024-06-07 21:15:56.920731] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:34.564 [2024-06-07 21:15:57.137347] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:34.564 [2024-06-07 21:15:57.137716] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:34.822 [2024-06-07 21:15:57.370639] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:34.822 [2024-06-07 21:15:57.371481] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:35.080 [2024-06-07 21:15:57.581085] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:35.338 21:15:57 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.338 21:15:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:35.338 21:15:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:35.338 21:15:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:35.338 21:15:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:35.338 21:15:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.338 21:15:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.597 21:15:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:35.597 "name": "raid_bdev1", 00:19:35.597 "uuid": "b00a0f9c-d0aa-4e22-8f5e-779fb7acbe1e", 00:19:35.597 "strip_size_kb": 0, 00:19:35.597 "state": "online", 00:19:35.597 "raid_level": "raid1", 00:19:35.597 "superblock": false, 00:19:35.597 "num_base_bdevs": 2, 00:19:35.597 "num_base_bdevs_discovered": 2, 00:19:35.597 "num_base_bdevs_operational": 2, 00:19:35.597 "process": { 00:19:35.597 "type": "rebuild", 00:19:35.597 "target": "spare", 00:19:35.597 "progress": { 00:19:35.597 "blocks": 14336, 00:19:35.597 "percent": 21 00:19:35.597 } 00:19:35.597 }, 00:19:35.597 "base_bdevs_list": [ 00:19:35.597 { 00:19:35.597 "name": "spare", 00:19:35.597 "uuid": "5f7730aa-a4ef-5a2e-b08a-4a1f863c639c", 00:19:35.597 "is_configured": true, 00:19:35.597 "data_offset": 0, 00:19:35.597 "data_size": 65536 00:19:35.597 }, 00:19:35.597 { 00:19:35.597 "name": "BaseBdev2", 00:19:35.597 "uuid": "e13e983a-d9cb-4ec2-8585-1ba7e1129df4", 00:19:35.597 "is_configured": true, 00:19:35.597 "data_offset": 0, 00:19:35.597 "data_size": 65536 00:19:35.597 } 00:19:35.597 ] 00:19:35.597 }' 00:19:35.597 21:15:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:35.597 21:15:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.597 21:15:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:35.597 21:15:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.597 21:15:58 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:19:35.597 21:15:58 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:19:35.597 21:15:58 -- bdev/bdev_raid.sh@644 -- # 
'[' raid1 = raid1 ']' 00:19:35.597 21:15:58 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:19:35.597 21:15:58 -- bdev/bdev_raid.sh@657 -- # local timeout=418 00:19:35.597 21:15:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:35.597 21:15:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.597 21:15:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:35.597 21:15:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:35.597 21:15:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:35.597 21:15:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:35.597 21:15:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.597 21:15:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.855 [2024-06-07 21:15:58.414776] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:35.855 21:15:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:35.855 "name": "raid_bdev1", 00:19:35.855 "uuid": "b00a0f9c-d0aa-4e22-8f5e-779fb7acbe1e", 00:19:35.855 "strip_size_kb": 0, 00:19:35.855 "state": "online", 00:19:35.855 "raid_level": "raid1", 00:19:35.855 "superblock": false, 00:19:35.855 "num_base_bdevs": 2, 00:19:35.855 "num_base_bdevs_discovered": 2, 00:19:35.855 "num_base_bdevs_operational": 2, 00:19:35.855 "process": { 00:19:35.855 "type": "rebuild", 00:19:35.855 "target": "spare", 00:19:35.855 "progress": { 00:19:35.855 "blocks": 20480, 00:19:35.855 "percent": 31 00:19:35.856 } 00:19:35.856 }, 00:19:35.856 "base_bdevs_list": [ 00:19:35.856 { 00:19:35.856 "name": "spare", 00:19:35.856 "uuid": "5f7730aa-a4ef-5a2e-b08a-4a1f863c639c", 00:19:35.856 "is_configured": true, 00:19:35.856 "data_offset": 0, 00:19:35.856 "data_size": 65536 00:19:35.856 }, 00:19:35.856 { 00:19:35.856 "name": "BaseBdev2", 00:19:35.856 "uuid": "e13e983a-d9cb-4ec2-8585-1ba7e1129df4", 00:19:35.856 "is_configured": true, 00:19:35.856 "data_offset": 0, 00:19:35.856 "data_size": 65536 00:19:35.856 } 00:19:35.856 ] 00:19:35.856 }' 00:19:35.856 21:15:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:35.856 21:15:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.856 21:15:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:36.113 21:15:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:36.113 21:15:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:36.114 [2024-06-07 21:15:58.765634] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:37.075 [2024-06-07 21:15:59.478719] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:19:37.075 21:15:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:37.075 21:15:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.075 21:15:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:37.075 21:15:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:37.075 21:15:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:37.075 21:15:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:37.075 21:15:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
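verify_raid_bdev_process, re-run throughout the rebuild, boils down to fetching the bdev dump and jq-selecting the process fields, as the @188/@190/@191 trace lines that follow show. In condensed form (a sketch; the intermediate variable is illustrative):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[[ $(jq -r '.process.type   // "none"' <<< "$info") == rebuild ]]   # falls back to "none" once rebuild ends
[[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]]
jq -r '.process.progress.blocks' <<< "$info"   # 43008 of 65536 here, i.e. the "percent": 65 below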
00:19:37.075 21:15:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.334 21:15:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:37.334 "name": "raid_bdev1", 00:19:37.334 "uuid": "b00a0f9c-d0aa-4e22-8f5e-779fb7acbe1e", 00:19:37.334 "strip_size_kb": 0, 00:19:37.334 "state": "online", 00:19:37.334 "raid_level": "raid1", 00:19:37.334 "superblock": false, 00:19:37.334 "num_base_bdevs": 2, 00:19:37.334 "num_base_bdevs_discovered": 2, 00:19:37.334 "num_base_bdevs_operational": 2, 00:19:37.334 "process": { 00:19:37.334 "type": "rebuild", 00:19:37.334 "target": "spare", 00:19:37.334 "progress": { 00:19:37.334 "blocks": 43008, 00:19:37.334 "percent": 65 00:19:37.334 } 00:19:37.334 }, 00:19:37.334 "base_bdevs_list": [ 00:19:37.334 { 00:19:37.334 "name": "spare", 00:19:37.334 "uuid": "5f7730aa-a4ef-5a2e-b08a-4a1f863c639c", 00:19:37.334 "is_configured": true, 00:19:37.334 "data_offset": 0, 00:19:37.334 "data_size": 65536 00:19:37.334 }, 00:19:37.334 { 00:19:37.334 "name": "BaseBdev2", 00:19:37.334 "uuid": "e13e983a-d9cb-4ec2-8585-1ba7e1129df4", 00:19:37.334 "is_configured": true, 00:19:37.334 "data_offset": 0, 00:19:37.334 "data_size": 65536 00:19:37.334 } 00:19:37.334 ] 00:19:37.334 }' 00:19:37.334 21:15:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:37.334 [2024-06-07 21:15:59.802815] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:19:37.334 21:15:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.334 21:15:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:37.334 21:15:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.334 21:15:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:38.269 21:16:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:38.269 21:16:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.269 21:16:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:38.269 21:16:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:38.269 21:16:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:38.269 21:16:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:38.269 21:16:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.269 21:16:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.527 [2024-06-07 21:16:00.974769] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:38.527 [2024-06-07 21:16:01.074751] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:38.527 [2024-06-07 21:16:01.077524] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.527 21:16:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:38.527 "name": "raid_bdev1", 00:19:38.527 "uuid": "b00a0f9c-d0aa-4e22-8f5e-779fb7acbe1e", 00:19:38.527 "strip_size_kb": 0, 00:19:38.527 "state": "online", 00:19:38.527 "raid_level": "raid1", 00:19:38.527 "superblock": false, 00:19:38.527 "num_base_bdevs": 2, 00:19:38.527 "num_base_bdevs_discovered": 2, 00:19:38.527 "num_base_bdevs_operational": 2, 00:19:38.527 "base_bdevs_list": [ 00:19:38.527 { 00:19:38.528 "name": "spare", 00:19:38.528 "uuid": "5f7730aa-a4ef-5a2e-b08a-4a1f863c639c", 00:19:38.528 "is_configured": true, 00:19:38.528 "data_offset": 0, 00:19:38.528 
"data_size": 65536 00:19:38.528 }, 00:19:38.528 { 00:19:38.528 "name": "BaseBdev2", 00:19:38.528 "uuid": "e13e983a-d9cb-4ec2-8585-1ba7e1129df4", 00:19:38.528 "is_configured": true, 00:19:38.528 "data_offset": 0, 00:19:38.528 "data_size": 65536 00:19:38.528 } 00:19:38.528 ] 00:19:38.528 }' 00:19:38.528 21:16:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:38.528 21:16:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:38.528 21:16:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:38.786 21:16:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:38.786 21:16:01 -- bdev/bdev_raid.sh@660 -- # break 00:19:38.786 21:16:01 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:38.786 21:16:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:38.786 21:16:01 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:38.786 21:16:01 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:38.786 21:16:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:38.786 21:16:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.786 21:16:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.045 21:16:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:39.045 "name": "raid_bdev1", 00:19:39.045 "uuid": "b00a0f9c-d0aa-4e22-8f5e-779fb7acbe1e", 00:19:39.045 "strip_size_kb": 0, 00:19:39.045 "state": "online", 00:19:39.045 "raid_level": "raid1", 00:19:39.045 "superblock": false, 00:19:39.045 "num_base_bdevs": 2, 00:19:39.045 "num_base_bdevs_discovered": 2, 00:19:39.045 "num_base_bdevs_operational": 2, 00:19:39.045 "base_bdevs_list": [ 00:19:39.045 { 00:19:39.045 "name": "spare", 00:19:39.045 "uuid": "5f7730aa-a4ef-5a2e-b08a-4a1f863c639c", 00:19:39.045 "is_configured": true, 00:19:39.045 "data_offset": 0, 00:19:39.045 "data_size": 65536 00:19:39.045 }, 00:19:39.045 { 00:19:39.045 "name": "BaseBdev2", 00:19:39.045 "uuid": "e13e983a-d9cb-4ec2-8585-1ba7e1129df4", 00:19:39.045 "is_configured": true, 00:19:39.045 "data_offset": 0, 00:19:39.045 "data_size": 65536 00:19:39.045 } 00:19:39.045 ] 00:19:39.045 }' 00:19:39.045 21:16:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:39.045 21:16:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:39.045 21:16:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:39.045 21:16:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:39.045 21:16:01 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:39.045 21:16:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:39.045 21:16:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:39.045 21:16:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:39.045 21:16:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:39.045 21:16:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:39.045 21:16:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:39.045 21:16:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:39.045 21:16:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:39.045 21:16:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:39.045 21:16:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.045 21:16:01 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.304 21:16:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:39.304 "name": "raid_bdev1", 00:19:39.304 "uuid": "b00a0f9c-d0aa-4e22-8f5e-779fb7acbe1e", 00:19:39.304 "strip_size_kb": 0, 00:19:39.304 "state": "online", 00:19:39.304 "raid_level": "raid1", 00:19:39.304 "superblock": false, 00:19:39.304 "num_base_bdevs": 2, 00:19:39.304 "num_base_bdevs_discovered": 2, 00:19:39.304 "num_base_bdevs_operational": 2, 00:19:39.304 "base_bdevs_list": [ 00:19:39.304 { 00:19:39.304 "name": "spare", 00:19:39.304 "uuid": "5f7730aa-a4ef-5a2e-b08a-4a1f863c639c", 00:19:39.304 "is_configured": true, 00:19:39.304 "data_offset": 0, 00:19:39.304 "data_size": 65536 00:19:39.304 }, 00:19:39.304 { 00:19:39.304 "name": "BaseBdev2", 00:19:39.304 "uuid": "e13e983a-d9cb-4ec2-8585-1ba7e1129df4", 00:19:39.304 "is_configured": true, 00:19:39.304 "data_offset": 0, 00:19:39.304 "data_size": 65536 00:19:39.304 } 00:19:39.304 ] 00:19:39.304 }' 00:19:39.304 21:16:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:39.304 21:16:01 -- common/autotest_common.sh@10 -- # set +x 00:19:40.239 21:16:02 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:40.239 [2024-06-07 21:16:02.751317] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:40.239 [2024-06-07 21:16:02.751372] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:40.239 00:19:40.239 Latency(us) 00:19:40.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.239 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:40.239 raid_bdev1 : 10.69 108.48 325.45 0.00 0.00 12405.92 294.17 109147.23 00:19:40.239 =================================================================================================================== 00:19:40.239 Total : 108.48 325.45 0.00 0.00 12405.92 294.17 109147.23 00:19:40.239 [2024-06-07 21:16:02.815607] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.239 [2024-06-07 21:16:02.815691] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:40.239 [2024-06-07 21:16:02.815792] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:40.239 [2024-06-07 21:16:02.815808] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:19:40.239 0 00:19:40.239 21:16:02 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.239 21:16:02 -- bdev/bdev_raid.sh@671 -- # jq length 00:19:40.498 21:16:03 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:19:40.498 21:16:03 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:19:40.498 21:16:03 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:19:40.498 21:16:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:40.498 21:16:03 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:19:40.498 21:16:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:40.498 21:16:03 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:19:40.498 21:16:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:40.498 21:16:03 -- bdev/nbd_common.sh@12 -- # local i 00:19:40.498 21:16:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:40.498 
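Two consistency checks are worth calling out before the nbd traffic below. First, the job summary just above is internally consistent: 108.48 IOPS at a 3 MiB I/O size is 108.48 * 3 = 325.44 MiB/s, matching the 325.45 MiB/s column up to rounding. Second, the cmp offset used below differs from the sb test for a concrete reason: the devices are compared starting at data_offset * blocklen bytes, and this superblock-less raid reports data_offset 0.

# offset handed to cmp = data_offset (blocks) * blocklen (bytes)
# raid_rebuild_test_sb above:  2048 * 512 = 1048576  -> cmp -i 1048576 /dev/nbd0 /dev/nbd1
# raid_rebuild_test_io here:      0 * 512 = 0        -> cmp -i 0       /dev/nbd0 /dev/nbd1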
21:16:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:40.498 21:16:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:19:40.757 /dev/nbd0 00:19:40.757 21:16:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:40.757 21:16:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:40.757 21:16:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:19:40.757 21:16:03 -- common/autotest_common.sh@857 -- # local i 00:19:40.757 21:16:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:40.757 21:16:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:40.757 21:16:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:19:40.757 21:16:03 -- common/autotest_common.sh@861 -- # break 00:19:40.757 21:16:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:40.757 21:16:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:40.757 21:16:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:40.757 1+0 records in 00:19:40.757 1+0 records out 00:19:40.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257679 s, 15.9 MB/s 00:19:40.757 21:16:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.757 21:16:03 -- common/autotest_common.sh@874 -- # size=4096 00:19:40.757 21:16:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.757 21:16:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:40.757 21:16:03 -- common/autotest_common.sh@877 -- # return 0 00:19:40.757 21:16:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:40.757 21:16:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:40.757 21:16:03 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:19:40.757 21:16:03 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:19:40.757 21:16:03 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:19:40.757 21:16:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:40.757 21:16:03 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:19:40.757 21:16:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:40.757 21:16:03 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:19:40.757 21:16:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:40.757 21:16:03 -- bdev/nbd_common.sh@12 -- # local i 00:19:40.757 21:16:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:40.757 21:16:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:40.757 21:16:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:19:41.016 /dev/nbd1 00:19:41.016 21:16:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:41.016 21:16:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:41.016 21:16:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:19:41.016 21:16:03 -- common/autotest_common.sh@857 -- # local i 00:19:41.016 21:16:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:41.016 21:16:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:41.016 21:16:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:19:41.016 21:16:03 -- common/autotest_common.sh@861 -- # break 00:19:41.016 21:16:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:41.016 21:16:03 -- common/autotest_common.sh@872 -- # (( 
i <= 20 )) 00:19:41.016 21:16:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:41.016 1+0 records in 00:19:41.016 1+0 records out 00:19:41.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266236 s, 15.4 MB/s 00:19:41.016 21:16:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.016 21:16:03 -- common/autotest_common.sh@874 -- # size=4096 00:19:41.016 21:16:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.016 21:16:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:41.016 21:16:03 -- common/autotest_common.sh@877 -- # return 0 00:19:41.016 21:16:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:41.016 21:16:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:41.016 21:16:03 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:41.016 21:16:03 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:19:41.016 21:16:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:41.016 21:16:03 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:19:41.016 21:16:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:41.016 21:16:03 -- bdev/nbd_common.sh@51 -- # local i 00:19:41.016 21:16:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:41.016 21:16:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:41.275 21:16:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:41.275 21:16:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:41.275 21:16:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:41.275 21:16:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:41.275 21:16:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:41.275 21:16:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:41.275 21:16:03 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:19:41.533 21:16:04 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:19:41.533 21:16:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:41.533 21:16:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:41.533 21:16:04 -- bdev/nbd_common.sh@41 -- # break 00:19:41.533 21:16:04 -- bdev/nbd_common.sh@45 -- # return 0 00:19:41.533 21:16:04 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:41.533 21:16:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:41.533 21:16:04 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:19:41.533 21:16:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:41.533 21:16:04 -- bdev/nbd_common.sh@51 -- # local i 00:19:41.533 21:16:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:41.533 21:16:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:41.792 21:16:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:41.792 21:16:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:41.792 21:16:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:41.792 21:16:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:41.792 21:16:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:41.792 21:16:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:41.792 21:16:04 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:19:41.792 21:16:04 -- bdev/nbd_common.sh@37 -- # (( 
i++ )) 00:19:41.792 21:16:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:41.792 21:16:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:41.792 21:16:04 -- bdev/nbd_common.sh@41 -- # break 00:19:41.792 21:16:04 -- bdev/nbd_common.sh@45 -- # return 0 00:19:41.792 21:16:04 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:19:41.792 21:16:04 -- bdev/bdev_raid.sh@709 -- # killprocess 137161 00:19:41.792 21:16:04 -- common/autotest_common.sh@926 -- # '[' -z 137161 ']' 00:19:41.792 21:16:04 -- common/autotest_common.sh@930 -- # kill -0 137161 00:19:41.792 21:16:04 -- common/autotest_common.sh@931 -- # uname 00:19:41.792 21:16:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:41.792 21:16:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137161 00:19:41.792 21:16:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:41.792 killing process with pid 137161 21:16:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:41.792 21:16:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 137161' 00:19:41.792 21:16:04 -- common/autotest_common.sh@945 -- # kill 137161
00:19:41.792 Received shutdown signal, test time was about 12.305809 seconds
00:19:41.792
00:19:41.792 Latency(us)
00:19:41.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:41.792 ===================================================================================================================
00:19:41.792 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:41.792 21:16:04 -- common/autotest_common.sh@950 -- # wait 137161 00:19:41.792 [2024-06-07 21:16:04.424012] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:41.792 [2024-06-07 21:16:04.459777] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@711 -- # return 0 00:19:42.360 00:19:42.360 real 0m16.558s 00:19:42.360 user 0m26.305s 00:19:42.360 sys 0m1.905s 00:19:42.360 21:16:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:42.360 ************************************ 00:19:42.360 END TEST raid_rebuild_test_io 00:19:42.360 ************************************ 00:19:42.360 21:16:04 -- common/autotest_common.sh@10 -- # set +x 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:19:42.360 21:16:04 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:19:42.360 21:16:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:42.360 21:16:04 -- common/autotest_common.sh@10 -- # set +x 00:19:42.360 ************************************ 00:19:42.360 START TEST raid_rebuild_test_sb_io 00:19:42.360 ************************************ 00:19:42.360 21:16:04 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true true 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:19:42.360
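[editor's note] The killprocess trace above (autotest_common.sh@926-950) is the standard teardown helper for the bdevperf instance. A minimal sketch reconstructed from the traced commands alone; the sudo branch is not exercised in this run, so its handling here is an assumption:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 1          # bail out if the pid is already gone
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      # process_name is reactor_0 here; a sudo wrapper would need different handling
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                         # reap bdevperf and propagate its exit code
  }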
21:16:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@544 -- # raid_pid=137645 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@545 -- # waitforlisten 137645 /var/tmp/spdk-raid.sock 00:19:42.360 21:16:04 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:42.360 21:16:04 -- common/autotest_common.sh@819 -- # '[' -z 137645 ']' 00:19:42.360 21:16:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:42.360 21:16:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:42.360 21:16:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:42.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:42.360 21:16:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:42.360 21:16:04 -- common/autotest_common.sh@10 -- # set +x 00:19:42.360 [2024-06-07 21:16:04.912689] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:42.360 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:42.360 Zero copy mechanism will not be used. 
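[editor's note] The bdevperf invocation above is the background-I/O half of this test variant: -z parks the app until an explicit perform_tests RPC (sent later via bdevperf.py in this trace), while -t 60 -w randrw -M 50 -o 3M -q 2 describe a 60-second, 50/50 random read/write workload at 3 MiB I/O size and queue depth 2 -- hence the zero-copy notice, since 3145728 exceeds the 65536-byte threshold. A hedged launch sketch; the command line is verbatim from the trace, but the polling loop is only an assumed stand-in for the waitforlisten helper:

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  $bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
      -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # Block until the app answers on its private UNIX-domain RPC socket.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done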
00:19:42.360 [2024-06-07 21:16:04.912994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137645 ] 00:19:42.619 [2024-06-07 21:16:05.078438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.619 [2024-06-07 21:16:05.165129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.619 [2024-06-07 21:16:05.241684] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:43.186 21:16:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:43.186 21:16:05 -- common/autotest_common.sh@852 -- # return 0 00:19:43.186 21:16:05 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:43.186 21:16:05 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:19:43.186 21:16:05 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:43.444 BaseBdev1_malloc 00:19:43.444 21:16:06 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:43.702 [2024-06-07 21:16:06.286144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:43.702 [2024-06-07 21:16:06.286835] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.702 [2024-06-07 21:16:06.287048] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:43.702 [2024-06-07 21:16:06.287235] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.702 [2024-06-07 21:16:06.290240] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.702 [2024-06-07 21:16:06.290411] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:43.702 BaseBdev1 00:19:43.702 21:16:06 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:43.702 21:16:06 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:19:43.702 21:16:06 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:43.960 BaseBdev2_malloc 00:19:43.960 21:16:06 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:44.218 [2024-06-07 21:16:06.773310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:44.218 [2024-06-07 21:16:06.773536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.218 [2024-06-07 21:16:06.773697] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:19:44.218 [2024-06-07 21:16:06.773893] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.218 [2024-06-07 21:16:06.776814] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.218 [2024-06-07 21:16:06.777047] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:44.218 BaseBdev2 00:19:44.218 21:16:06 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:44.478 spare_malloc 00:19:44.478 21:16:07 
-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:44.736 spare_delay 00:19:44.736 21:16:07 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:44.995 [2024-06-07 21:16:07.524416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:44.995 [2024-06-07 21:16:07.524551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.995 [2024-06-07 21:16:07.524609] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:44.995 [2024-06-07 21:16:07.524661] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.995 [2024-06-07 21:16:07.527024] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.995 [2024-06-07 21:16:07.527090] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:44.995 spare 00:19:44.995 21:16:07 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:45.254 [2024-06-07 21:16:07.752470] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:45.254 [2024-06-07 21:16:07.754837] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:45.254 [2024-06-07 21:16:07.755109] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:19:45.254 [2024-06-07 21:16:07.755135] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:45.254 [2024-06-07 21:16:07.755353] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:19:45.254 [2024-06-07 21:16:07.755865] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:19:45.254 [2024-06-07 21:16:07.755898] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:19:45.254 [2024-06-07 21:16:07.756070] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:45.254 21:16:07 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:45.254 21:16:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:45.254 21:16:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:45.254 21:16:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:45.254 21:16:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:45.254 21:16:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:45.254 21:16:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:45.254 21:16:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:45.254 21:16:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:45.254 21:16:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:45.254 21:16:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.254 21:16:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.512 21:16:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:45.512 "name": "raid_bdev1", 00:19:45.512 "uuid": "84a00c3c-fc30-440e-bde6-8f1fcf53a3c6", 00:19:45.512 
"strip_size_kb": 0, 00:19:45.512 "state": "online", 00:19:45.512 "raid_level": "raid1", 00:19:45.512 "superblock": true, 00:19:45.512 "num_base_bdevs": 2, 00:19:45.512 "num_base_bdevs_discovered": 2, 00:19:45.512 "num_base_bdevs_operational": 2, 00:19:45.512 "base_bdevs_list": [ 00:19:45.512 { 00:19:45.512 "name": "BaseBdev1", 00:19:45.512 "uuid": "5d1b12bf-434b-55a6-95dd-3b24b113906a", 00:19:45.512 "is_configured": true, 00:19:45.512 "data_offset": 2048, 00:19:45.512 "data_size": 63488 00:19:45.512 }, 00:19:45.512 { 00:19:45.512 "name": "BaseBdev2", 00:19:45.512 "uuid": "19eaa77a-59ac-5d5d-99ab-f6f0d1a784df", 00:19:45.512 "is_configured": true, 00:19:45.512 "data_offset": 2048, 00:19:45.512 "data_size": 63488 00:19:45.512 } 00:19:45.512 ] 00:19:45.512 }' 00:19:45.512 21:16:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:45.512 21:16:07 -- common/autotest_common.sh@10 -- # set +x 00:19:46.078 21:16:08 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:46.078 21:16:08 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:46.335 [2024-06-07 21:16:08.788907] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:46.335 21:16:08 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:19:46.335 21:16:08 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.335 21:16:08 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:46.592 21:16:09 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:19:46.592 21:16:09 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:19:46.592 21:16:09 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:46.592 21:16:09 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:46.592 [2024-06-07 21:16:09.112438] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:19:46.592 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:46.592 Zero copy mechanism will not be used. 00:19:46.592 Running I/O for 60 seconds... 
00:19:46.592 [2024-06-07 21:16:09.252124] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:46.592 [2024-06-07 21:16:09.266032] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:19:46.850 21:16:09 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:46.850 21:16:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:46.850 21:16:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:46.850 21:16:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:46.850 21:16:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:46.850 21:16:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:46.850 21:16:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:46.850 21:16:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:46.850 21:16:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:46.850 21:16:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:46.850 21:16:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.850 21:16:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.850 21:16:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:46.850 "name": "raid_bdev1", 00:19:46.850 "uuid": "84a00c3c-fc30-440e-bde6-8f1fcf53a3c6", 00:19:46.850 "strip_size_kb": 0, 00:19:46.850 "state": "online", 00:19:46.850 "raid_level": "raid1", 00:19:46.850 "superblock": true, 00:19:46.850 "num_base_bdevs": 2, 00:19:46.850 "num_base_bdevs_discovered": 1, 00:19:46.850 "num_base_bdevs_operational": 1, 00:19:46.850 "base_bdevs_list": [ 00:19:46.850 { 00:19:46.850 "name": null, 00:19:46.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.850 "is_configured": false, 00:19:46.850 "data_offset": 2048, 00:19:46.850 "data_size": 63488 00:19:46.850 }, 00:19:46.850 { 00:19:46.850 "name": "BaseBdev2", 00:19:46.850 "uuid": "19eaa77a-59ac-5d5d-99ab-f6f0d1a784df", 00:19:46.850 "is_configured": true, 00:19:46.850 "data_offset": 2048, 00:19:46.850 "data_size": 63488 00:19:46.850 } 00:19:46.850 ] 00:19:46.850 }' 00:19:46.850 21:16:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:46.850 21:16:09 -- common/autotest_common.sh@10 -- # set +x 00:19:47.784 21:16:10 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:47.785 [2024-06-07 21:16:10.429781] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:47.785 [2024-06-07 21:16:10.429874] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:48.044 21:16:10 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:48.044 [2024-06-07 21:16:10.464496] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:19:48.044 [2024-06-07 21:16:10.466813] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:48.044 [2024-06-07 21:16:10.584496] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:48.044 [2024-06-07 21:16:10.715992] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:48.044 [2024-06-07 21:16:10.716260] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:19:48.611 [2024-06-07 21:16:11.184223] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:48.870 [2024-06-07 21:16:11.432401] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:48.870 [2024-06-07 21:16:11.433037] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:48.870 21:16:11 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:48.870 21:16:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:48.870 21:16:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:48.870 21:16:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:48.870 21:16:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:48.870 21:16:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.870 21:16:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.129 [2024-06-07 21:16:11.645188] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:49.129 [2024-06-07 21:16:11.645398] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:49.129 21:16:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:49.129 "name": "raid_bdev1", 00:19:49.129 "uuid": "84a00c3c-fc30-440e-bde6-8f1fcf53a3c6", 00:19:49.129 "strip_size_kb": 0, 00:19:49.129 "state": "online", 00:19:49.129 "raid_level": "raid1", 00:19:49.129 "superblock": true, 00:19:49.129 "num_base_bdevs": 2, 00:19:49.129 "num_base_bdevs_discovered": 2, 00:19:49.129 "num_base_bdevs_operational": 2, 00:19:49.129 "process": { 00:19:49.129 "type": "rebuild", 00:19:49.129 "target": "spare", 00:19:49.129 "progress": { 00:19:49.129 "blocks": 16384, 00:19:49.129 "percent": 25 00:19:49.129 } 00:19:49.129 }, 00:19:49.129 "base_bdevs_list": [ 00:19:49.129 { 00:19:49.129 "name": "spare", 00:19:49.129 "uuid": "6470ce9f-783b-58e4-bda3-f89123957db4", 00:19:49.129 "is_configured": true, 00:19:49.129 "data_offset": 2048, 00:19:49.129 "data_size": 63488 00:19:49.129 }, 00:19:49.129 { 00:19:49.129 "name": "BaseBdev2", 00:19:49.129 "uuid": "19eaa77a-59ac-5d5d-99ab-f6f0d1a784df", 00:19:49.129 "is_configured": true, 00:19:49.129 "data_offset": 2048, 00:19:49.129 "data_size": 63488 00:19:49.129 } 00:19:49.129 ] 00:19:49.129 }' 00:19:49.129 21:16:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:49.129 21:16:11 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:49.129 21:16:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:49.129 21:16:11 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:49.129 21:16:11 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:49.387 [2024-06-07 21:16:11.887770] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:49.387 [2024-06-07 21:16:11.888064] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:49.387 [2024-06-07 21:16:12.037220] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:49.646 
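[editor's note] Here begins the fault-injection half of the test: with the rebuild to spare at 25% (blocks 16384 of 63488), the rebuild target itself is pulled out of the array. A sketch of the step and its check, using only commands that appear in the trace (the jq path condenses the @127 filter):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  $rpc bdev_raid_remove_base_bdev spare    # while process.type is still "rebuild"
  $rpc bdev_raid_get_bdevs all | jq -r \
      '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'
  # Expected: 1. The rebuild aborts ("No such device" just below), but
  # raid_bdev1 must stay online on the surviving BaseBdev2.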
[2024-06-07 21:16:12.137526] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:49.647 [2024-06-07 21:16:12.139863] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.647 [2024-06-07 21:16:12.163605] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:19:49.647 21:16:12 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:49.647 21:16:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:49.647 21:16:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:49.647 21:16:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:49.647 21:16:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:49.647 21:16:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:49.647 21:16:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:49.647 21:16:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:49.647 21:16:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:49.647 21:16:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:49.647 21:16:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.647 21:16:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.906 21:16:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:49.906 "name": "raid_bdev1", 00:19:49.906 "uuid": "84a00c3c-fc30-440e-bde6-8f1fcf53a3c6", 00:19:49.906 "strip_size_kb": 0, 00:19:49.906 "state": "online", 00:19:49.906 "raid_level": "raid1", 00:19:49.906 "superblock": true, 00:19:49.906 "num_base_bdevs": 2, 00:19:49.906 "num_base_bdevs_discovered": 1, 00:19:49.906 "num_base_bdevs_operational": 1, 00:19:49.906 "base_bdevs_list": [ 00:19:49.906 { 00:19:49.906 "name": null, 00:19:49.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.906 "is_configured": false, 00:19:49.906 "data_offset": 2048, 00:19:49.906 "data_size": 63488 00:19:49.906 }, 00:19:49.906 { 00:19:49.906 "name": "BaseBdev2", 00:19:49.906 "uuid": "19eaa77a-59ac-5d5d-99ab-f6f0d1a784df", 00:19:49.906 "is_configured": true, 00:19:49.906 "data_offset": 2048, 00:19:49.906 "data_size": 63488 00:19:49.906 } 00:19:49.906 ] 00:19:49.906 }' 00:19:49.906 21:16:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:49.906 21:16:12 -- common/autotest_common.sh@10 -- # set +x 00:19:50.475 21:16:13 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:50.475 21:16:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:50.475 21:16:13 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:50.475 21:16:13 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:50.475 21:16:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:50.475 21:16:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.475 21:16:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.733 21:16:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:50.733 "name": "raid_bdev1", 00:19:50.733 "uuid": "84a00c3c-fc30-440e-bde6-8f1fcf53a3c6", 00:19:50.733 "strip_size_kb": 0, 00:19:50.733 "state": "online", 00:19:50.733 "raid_level": "raid1", 00:19:50.733 "superblock": true, 00:19:50.733 "num_base_bdevs": 2, 00:19:50.733 "num_base_bdevs_discovered": 1, 00:19:50.733 
"num_base_bdevs_operational": 1, 00:19:50.733 "base_bdevs_list": [ 00:19:50.733 { 00:19:50.733 "name": null, 00:19:50.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.733 "is_configured": false, 00:19:50.733 "data_offset": 2048, 00:19:50.733 "data_size": 63488 00:19:50.733 }, 00:19:50.733 { 00:19:50.733 "name": "BaseBdev2", 00:19:50.733 "uuid": "19eaa77a-59ac-5d5d-99ab-f6f0d1a784df", 00:19:50.733 "is_configured": true, 00:19:50.733 "data_offset": 2048, 00:19:50.733 "data_size": 63488 00:19:50.733 } 00:19:50.733 ] 00:19:50.733 }' 00:19:50.733 21:16:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:50.992 21:16:13 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:50.992 21:16:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:50.992 21:16:13 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:50.992 21:16:13 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:51.251 [2024-06-07 21:16:13.732519] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:51.251 [2024-06-07 21:16:13.732633] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:51.251 21:16:13 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:51.251 [2024-06-07 21:16:13.789074] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:19:51.251 [2024-06-07 21:16:13.791404] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:51.251 [2024-06-07 21:16:13.894580] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:51.251 [2024-06-07 21:16:13.895083] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:51.513 [2024-06-07 21:16:14.109164] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:51.513 [2024-06-07 21:16:14.109446] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:51.779 [2024-06-07 21:16:14.447742] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:52.037 [2024-06-07 21:16:14.563703] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:52.037 [2024-06-07 21:16:14.564219] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:52.296 21:16:14 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.296 21:16:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:52.296 21:16:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:52.296 21:16:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:52.296 21:16:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:52.296 21:16:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.296 21:16:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.555 21:16:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:52.555 "name": "raid_bdev1", 00:19:52.555 "uuid": "84a00c3c-fc30-440e-bde6-8f1fcf53a3c6", 00:19:52.555 
"strip_size_kb": 0, 00:19:52.555 "state": "online", 00:19:52.555 "raid_level": "raid1", 00:19:52.555 "superblock": true, 00:19:52.555 "num_base_bdevs": 2, 00:19:52.555 "num_base_bdevs_discovered": 2, 00:19:52.555 "num_base_bdevs_operational": 2, 00:19:52.555 "process": { 00:19:52.555 "type": "rebuild", 00:19:52.555 "target": "spare", 00:19:52.555 "progress": { 00:19:52.555 "blocks": 16384, 00:19:52.555 "percent": 25 00:19:52.555 } 00:19:52.555 }, 00:19:52.555 "base_bdevs_list": [ 00:19:52.555 { 00:19:52.555 "name": "spare", 00:19:52.555 "uuid": "6470ce9f-783b-58e4-bda3-f89123957db4", 00:19:52.555 "is_configured": true, 00:19:52.555 "data_offset": 2048, 00:19:52.555 "data_size": 63488 00:19:52.555 }, 00:19:52.555 { 00:19:52.555 "name": "BaseBdev2", 00:19:52.555 "uuid": "19eaa77a-59ac-5d5d-99ab-f6f0d1a784df", 00:19:52.555 "is_configured": true, 00:19:52.555 "data_offset": 2048, 00:19:52.555 "data_size": 63488 00:19:52.555 } 00:19:52.555 ] 00:19:52.555 }' 00:19:52.555 21:16:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:52.555 21:16:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.555 21:16:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:52.555 21:16:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.555 21:16:15 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:19:52.555 21:16:15 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:19:52.555 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:19:52.555 21:16:15 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:19:52.555 21:16:15 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:52.555 21:16:15 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:19:52.555 21:16:15 -- bdev/bdev_raid.sh@657 -- # local timeout=435 00:19:52.555 21:16:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:52.555 21:16:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.555 21:16:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:52.555 21:16:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:52.555 21:16:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:52.555 21:16:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:52.555 21:16:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.555 21:16:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.814 21:16:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:52.814 "name": "raid_bdev1", 00:19:52.814 "uuid": "84a00c3c-fc30-440e-bde6-8f1fcf53a3c6", 00:19:52.814 "strip_size_kb": 0, 00:19:52.814 "state": "online", 00:19:52.814 "raid_level": "raid1", 00:19:52.814 "superblock": true, 00:19:52.814 "num_base_bdevs": 2, 00:19:52.814 "num_base_bdevs_discovered": 2, 00:19:52.814 "num_base_bdevs_operational": 2, 00:19:52.814 "process": { 00:19:52.814 "type": "rebuild", 00:19:52.814 "target": "spare", 00:19:52.814 "progress": { 00:19:52.814 "blocks": 22528, 00:19:52.814 "percent": 35 00:19:52.814 } 00:19:52.814 }, 00:19:52.814 "base_bdevs_list": [ 00:19:52.814 { 00:19:52.814 "name": "spare", 00:19:52.814 "uuid": "6470ce9f-783b-58e4-bda3-f89123957db4", 00:19:52.814 "is_configured": true, 00:19:52.814 "data_offset": 2048, 00:19:52.814 "data_size": 63488 00:19:52.814 }, 00:19:52.814 { 00:19:52.814 "name": "BaseBdev2", 00:19:52.814 "uuid": 
"19eaa77a-59ac-5d5d-99ab-f6f0d1a784df", 00:19:52.814 "is_configured": true, 00:19:52.814 "data_offset": 2048, 00:19:52.814 "data_size": 63488 00:19:52.814 } 00:19:52.814 ] 00:19:52.814 }' 00:19:52.814 21:16:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:52.814 21:16:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.814 21:16:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:53.073 [2024-06-07 21:16:15.494801] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:53.073 21:16:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.073 21:16:15 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:53.073 [2024-06-07 21:16:15.724360] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:53.332 [2024-06-07 21:16:15.972675] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:53.592 [2024-06-07 21:16:16.094809] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:53.851 [2024-06-07 21:16:16.358134] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:19:53.851 21:16:16 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:53.851 21:16:16 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.851 21:16:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:53.851 21:16:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:53.851 21:16:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:53.851 21:16:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:54.109 21:16:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.109 21:16:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.368 21:16:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:54.368 "name": "raid_bdev1", 00:19:54.368 "uuid": "84a00c3c-fc30-440e-bde6-8f1fcf53a3c6", 00:19:54.368 "strip_size_kb": 0, 00:19:54.368 "state": "online", 00:19:54.368 "raid_level": "raid1", 00:19:54.368 "superblock": true, 00:19:54.368 "num_base_bdevs": 2, 00:19:54.368 "num_base_bdevs_discovered": 2, 00:19:54.368 "num_base_bdevs_operational": 2, 00:19:54.368 "process": { 00:19:54.368 "type": "rebuild", 00:19:54.368 "target": "spare", 00:19:54.368 "progress": { 00:19:54.368 "blocks": 43008, 00:19:54.368 "percent": 67 00:19:54.368 } 00:19:54.368 }, 00:19:54.368 "base_bdevs_list": [ 00:19:54.368 { 00:19:54.368 "name": "spare", 00:19:54.368 "uuid": "6470ce9f-783b-58e4-bda3-f89123957db4", 00:19:54.368 "is_configured": true, 00:19:54.368 "data_offset": 2048, 00:19:54.368 "data_size": 63488 00:19:54.368 }, 00:19:54.368 { 00:19:54.368 "name": "BaseBdev2", 00:19:54.368 "uuid": "19eaa77a-59ac-5d5d-99ab-f6f0d1a784df", 00:19:54.368 "is_configured": true, 00:19:54.368 "data_offset": 2048, 00:19:54.368 "data_size": 63488 00:19:54.368 } 00:19:54.368 ] 00:19:54.368 }' 00:19:54.368 21:16:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:54.368 21:16:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.368 21:16:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:54.368 21:16:16 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.368 21:16:16 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:54.626 [2024-06-07 21:16:17.119847] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:19:55.561 [2024-06-07 21:16:17.883115] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:55.561 21:16:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:55.561 21:16:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.561 21:16:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:55.561 21:16:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:55.561 21:16:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:55.561 21:16:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:55.561 21:16:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.561 21:16:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.561 [2024-06-07 21:16:17.983190] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:55.561 [2024-06-07 21:16:17.985613] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.561 21:16:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:55.561 "name": "raid_bdev1", 00:19:55.561 "uuid": "84a00c3c-fc30-440e-bde6-8f1fcf53a3c6", 00:19:55.561 "strip_size_kb": 0, 00:19:55.561 "state": "online", 00:19:55.561 "raid_level": "raid1", 00:19:55.561 "superblock": true, 00:19:55.561 "num_base_bdevs": 2, 00:19:55.561 "num_base_bdevs_discovered": 2, 00:19:55.561 "num_base_bdevs_operational": 2, 00:19:55.561 "base_bdevs_list": [ 00:19:55.561 { 00:19:55.561 "name": "spare", 00:19:55.561 "uuid": "6470ce9f-783b-58e4-bda3-f89123957db4", 00:19:55.561 "is_configured": true, 00:19:55.561 "data_offset": 2048, 00:19:55.561 "data_size": 63488 00:19:55.561 }, 00:19:55.561 { 00:19:55.561 "name": "BaseBdev2", 00:19:55.561 "uuid": "19eaa77a-59ac-5d5d-99ab-f6f0d1a784df", 00:19:55.561 "is_configured": true, 00:19:55.561 "data_offset": 2048, 00:19:55.561 "data_size": 63488 00:19:55.561 } 00:19:55.561 ] 00:19:55.561 }' 00:19:55.561 21:16:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:55.561 21:16:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:55.561 21:16:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:55.819 21:16:18 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:55.819 21:16:18 -- bdev/bdev_raid.sh@660 -- # break 00:19:55.819 21:16:18 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:55.819 21:16:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:55.819 21:16:18 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:55.819 21:16:18 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:55.819 21:16:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:55.819 21:16:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.819 21:16:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.819 21:16:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:55.819 "name": "raid_bdev1", 00:19:55.819 "uuid": "84a00c3c-fc30-440e-bde6-8f1fcf53a3c6", 00:19:55.819 "strip_size_kb": 
0, 00:19:55.819 "state": "online", 00:19:55.819 "raid_level": "raid1", 00:19:55.819 "superblock": true, 00:19:55.819 "num_base_bdevs": 2, 00:19:55.819 "num_base_bdevs_discovered": 2, 00:19:55.819 "num_base_bdevs_operational": 2, 00:19:55.819 "base_bdevs_list": [ 00:19:55.819 { 00:19:55.819 "name": "spare", 00:19:55.819 "uuid": "6470ce9f-783b-58e4-bda3-f89123957db4", 00:19:55.819 "is_configured": true, 00:19:55.819 "data_offset": 2048, 00:19:55.819 "data_size": 63488 00:19:55.819 }, 00:19:55.819 { 00:19:55.819 "name": "BaseBdev2", 00:19:55.819 "uuid": "19eaa77a-59ac-5d5d-99ab-f6f0d1a784df", 00:19:55.819 "is_configured": true, 00:19:55.819 "data_offset": 2048, 00:19:55.819 "data_size": 63488 00:19:55.819 } 00:19:55.819 ] 00:19:55.819 }' 00:19:55.819 21:16:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:56.077 21:16:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:56.077 21:16:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:56.077 21:16:18 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:56.077 21:16:18 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:56.077 21:16:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:56.077 21:16:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:56.077 21:16:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:56.077 21:16:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:56.077 21:16:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:56.077 21:16:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:56.077 21:16:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:56.077 21:16:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:56.077 21:16:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:56.077 21:16:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.077 21:16:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.335 21:16:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:56.335 "name": "raid_bdev1", 00:19:56.335 "uuid": "84a00c3c-fc30-440e-bde6-8f1fcf53a3c6", 00:19:56.335 "strip_size_kb": 0, 00:19:56.335 "state": "online", 00:19:56.335 "raid_level": "raid1", 00:19:56.335 "superblock": true, 00:19:56.335 "num_base_bdevs": 2, 00:19:56.335 "num_base_bdevs_discovered": 2, 00:19:56.335 "num_base_bdevs_operational": 2, 00:19:56.335 "base_bdevs_list": [ 00:19:56.335 { 00:19:56.335 "name": "spare", 00:19:56.335 "uuid": "6470ce9f-783b-58e4-bda3-f89123957db4", 00:19:56.335 "is_configured": true, 00:19:56.335 "data_offset": 2048, 00:19:56.335 "data_size": 63488 00:19:56.335 }, 00:19:56.335 { 00:19:56.335 "name": "BaseBdev2", 00:19:56.335 "uuid": "19eaa77a-59ac-5d5d-99ab-f6f0d1a784df", 00:19:56.335 "is_configured": true, 00:19:56.335 "data_offset": 2048, 00:19:56.335 "data_size": 63488 00:19:56.335 } 00:19:56.335 ] 00:19:56.335 }' 00:19:56.335 21:16:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:56.335 21:16:18 -- common/autotest_common.sh@10 -- # set +x 00:19:56.900 21:16:19 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:57.158 [2024-06-07 21:16:19.759827] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:57.158 [2024-06-07 21:16:19.759883] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline
00:19:57.416
00:19:57.416 Latency(us)
00:19:57.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:57.416 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:19:57.416 raid_bdev1 : 10.74 110.05 330.15 0.00 0.00 12518.53 288.58 117726.49
00:19:57.416 ===================================================================================================================
00:19:57.416 Total : 110.05 330.15 0.00 0.00 12518.53 288.58 117726.49
00:19:57.416 [2024-06-07 21:16:19.859942] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.416 [2024-06-07 21:16:19.860022] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:57.416 [2024-06-07 21:16:19.860127] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:57.416 [2024-06-07 21:16:19.860144] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:19:57.416 0 00:19:57.416 21:16:19 -- bdev/bdev_raid.sh@671 -- # jq length 00:19:57.416 21:16:19 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.416 21:16:20 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:19:57.416 21:16:20 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:19:57.416 21:16:20 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:19:57.416 21:16:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:57.416 21:16:20 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:19:57.416 21:16:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:57.416 21:16:20 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:19:57.416 21:16:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:57.416 21:16:20 -- bdev/nbd_common.sh@12 -- # local i 00:19:57.416 21:16:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:57.416 21:16:20 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:57.416 21:16:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:19:57.674 /dev/nbd0 00:19:57.932 21:16:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:57.932 21:16:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:57.932 21:16:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:19:57.932 21:16:20 -- common/autotest_common.sh@857 -- # local i 00:19:57.932 21:16:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:57.932 21:16:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:57.932 21:16:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:19:57.932 21:16:20 -- common/autotest_common.sh@861 -- # break 00:19:57.932 21:16:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:57.932 21:16:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:57.932 21:16:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:57.932 1+0 records in 00:19:57.932 1+0 records out 00:19:57.932 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446751 s, 9.2 MB/s 00:19:57.932 21:16:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.932 21:16:20 -- common/autotest_common.sh@874 -- # size=4096 00:19:57.932 21:16:20 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.932 21:16:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:57.932 21:16:20 -- common/autotest_common.sh@877 -- # return 0 00:19:57.932 21:16:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:57.932 21:16:20 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:57.932 21:16:20 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:19:57.932 21:16:20 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:19:57.932 21:16:20 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:19:57.932 21:16:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:57.932 21:16:20 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:19:57.932 21:16:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:57.932 21:16:20 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:19:57.932 21:16:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:57.932 21:16:20 -- bdev/nbd_common.sh@12 -- # local i 00:19:57.932 21:16:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:57.932 21:16:20 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:57.932 21:16:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:19:57.932 /dev/nbd1 00:19:58.190 21:16:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:58.190 21:16:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:58.190 21:16:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:19:58.190 21:16:20 -- common/autotest_common.sh@857 -- # local i 00:19:58.190 21:16:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:58.190 21:16:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:58.190 21:16:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:19:58.190 21:16:20 -- common/autotest_common.sh@861 -- # break 00:19:58.190 21:16:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:58.190 21:16:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:58.190 21:16:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:58.190 1+0 records in 00:19:58.190 1+0 records out 00:19:58.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000614245 s, 6.7 MB/s 00:19:58.190 21:16:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.190 21:16:20 -- common/autotest_common.sh@874 -- # size=4096 00:19:58.190 21:16:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.190 21:16:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:58.190 21:16:20 -- common/autotest_common.sh@877 -- # return 0 00:19:58.190 21:16:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:58.190 21:16:20 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:58.190 21:16:20 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:58.190 21:16:20 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:19:58.190 21:16:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:58.190 21:16:20 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:19:58.190 21:16:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:58.190 21:16:20 -- bdev/nbd_common.sh@51 -- # local i 00:19:58.190 21:16:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:58.190 21:16:20 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:58.457 21:16:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:58.457 21:16:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:58.457 21:16:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:58.457 21:16:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:58.457 21:16:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:58.457 21:16:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:58.457 21:16:20 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:19:58.457 21:16:21 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:19:58.457 21:16:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:58.457 21:16:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:58.457 21:16:21 -- bdev/nbd_common.sh@41 -- # break 00:19:58.457 21:16:21 -- bdev/nbd_common.sh@45 -- # return 0 00:19:58.457 21:16:21 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:58.457 21:16:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:58.457 21:16:21 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:19:58.457 21:16:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:58.457 21:16:21 -- bdev/nbd_common.sh@51 -- # local i 00:19:58.457 21:16:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:58.457 21:16:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:58.730 21:16:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:58.730 21:16:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:58.730 21:16:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:58.730 21:16:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:58.730 21:16:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:58.730 21:16:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:58.730 21:16:21 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:19:58.988 21:16:21 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:19:58.988 21:16:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:58.988 21:16:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:58.988 21:16:21 -- bdev/nbd_common.sh@41 -- # break 00:19:58.988 21:16:21 -- bdev/nbd_common.sh@45 -- # return 0 00:19:58.988 21:16:21 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:19:58.988 21:16:21 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:19:58.988 21:16:21 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:19:58.988 21:16:21 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:19:59.246 21:16:21 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:59.246 [2024-06-07 21:16:21.873557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:59.246 [2024-06-07 21:16:21.873703] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.246 [2024-06-07 21:16:21.873754] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:59.246 [2024-06-07 21:16:21.873789] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.246 [2024-06-07 21:16:21.876638] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.246 [2024-06-07 
21:16:21.876714] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:59.246 [2024-06-07 21:16:21.876845] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:59.246 [2024-06-07 21:16:21.877013] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:59.246 BaseBdev1 00:19:59.246 21:16:21 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:19:59.246 21:16:21 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:19:59.246 21:16:21 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:19:59.504 21:16:22 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:59.763 [2024-06-07 21:16:22.337623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:59.763 [2024-06-07 21:16:22.337730] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.763 [2024-06-07 21:16:22.337776] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:59.763 [2024-06-07 21:16:22.337810] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.763 [2024-06-07 21:16:22.338446] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.763 [2024-06-07 21:16:22.338522] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:59.763 [2024-06-07 21:16:22.338630] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:19:59.763 [2024-06-07 21:16:22.338649] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:19:59.763 [2024-06-07 21:16:22.338657] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:59.763 [2024-06-07 21:16:22.338688] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:19:59.763 [2024-06-07 21:16:22.338779] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:59.763 BaseBdev2 00:19:59.763 21:16:22 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:00.021 21:16:22 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:00.280 [2024-06-07 21:16:22.733712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:00.280 [2024-06-07 21:16:22.733770] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.280 [2024-06-07 21:16:22.733812] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:00.280 [2024-06-07 21:16:22.733835] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.280 [2024-06-07 21:16:22.734351] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.280 [2024-06-07 21:16:22.734419] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:00.280 [2024-06-07 21:16:22.734537] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:20:00.280 
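[editor's note] This delete-and-recreate cycle is the superblock assertion that distinguishes the _sb_ variant: when each passthru bdev is torn down and recreated, the raid module's examine path finds the metadata written by bdev_raid_create -s and re-claims the device without any explicit add RPC. (The earlier cmp -i 1048576 likewise skipped exactly the 1 MiB superblock region, data_offset 2048 x 512 B, where the non-superblock test compared from offset 0.) A sketch of the cycle, RPC names verbatim from the trace:

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  for bdev in BaseBdev1 BaseBdev2; do
      $rpc bdev_passthru_delete "$bdev"
      $rpc bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
      # log: "raid superblock found on bdev ..." -> re-claimed by raid_bdev1
  done
  $rpc bdev_passthru_delete spare
  $rpc bdev_passthru_create -b spare_delay -p spare   # spare sits on the delay bdev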
[2024-06-07 21:16:22.734575] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:00.280 spare 00:20:00.280 21:16:22 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:00.280 21:16:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:00.280 21:16:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:00.280 21:16:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:00.280 21:16:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:00.280 21:16:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:00.280 21:16:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:00.280 21:16:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:00.280 21:16:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:00.280 21:16:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:00.280 21:16:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.280 21:16:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.280 [2024-06-07 21:16:22.834701] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:20:00.280 [2024-06-07 21:16:22.834728] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:00.280 [2024-06-07 21:16:22.834924] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002c930 00:20:00.280 [2024-06-07 21:16:22.835425] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:20:00.280 [2024-06-07 21:16:22.835469] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:20:00.280 [2024-06-07 21:16:22.835631] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.538 21:16:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:00.538 "name": "raid_bdev1", 00:20:00.538 "uuid": "84a00c3c-fc30-440e-bde6-8f1fcf53a3c6", 00:20:00.538 "strip_size_kb": 0, 00:20:00.538 "state": "online", 00:20:00.538 "raid_level": "raid1", 00:20:00.538 "superblock": true, 00:20:00.538 "num_base_bdevs": 2, 00:20:00.538 "num_base_bdevs_discovered": 2, 00:20:00.538 "num_base_bdevs_operational": 2, 00:20:00.538 "base_bdevs_list": [ 00:20:00.538 { 00:20:00.538 "name": "spare", 00:20:00.538 "uuid": "6470ce9f-783b-58e4-bda3-f89123957db4", 00:20:00.538 "is_configured": true, 00:20:00.538 "data_offset": 2048, 00:20:00.538 "data_size": 63488 00:20:00.538 }, 00:20:00.538 { 00:20:00.538 "name": "BaseBdev2", 00:20:00.538 "uuid": "19eaa77a-59ac-5d5d-99ab-f6f0d1a784df", 00:20:00.538 "is_configured": true, 00:20:00.538 "data_offset": 2048, 00:20:00.538 "data_size": 63488 00:20:00.538 } 00:20:00.538 ] 00:20:00.538 }' 00:20:00.538 21:16:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:00.538 21:16:22 -- common/autotest_common.sh@10 -- # set +x 00:20:01.103 21:16:23 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:01.103 21:16:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:01.103 21:16:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:01.103 21:16:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:01.103 21:16:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:01.103 21:16:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:01.103 21:16:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.361 21:16:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:01.361 "name": "raid_bdev1", 00:20:01.361 "uuid": "84a00c3c-fc30-440e-bde6-8f1fcf53a3c6", 00:20:01.361 "strip_size_kb": 0, 00:20:01.361 "state": "online", 00:20:01.361 "raid_level": "raid1", 00:20:01.361 "superblock": true, 00:20:01.361 "num_base_bdevs": 2, 00:20:01.361 "num_base_bdevs_discovered": 2, 00:20:01.361 "num_base_bdevs_operational": 2, 00:20:01.361 "base_bdevs_list": [ 00:20:01.361 { 00:20:01.361 "name": "spare", 00:20:01.361 "uuid": "6470ce9f-783b-58e4-bda3-f89123957db4", 00:20:01.361 "is_configured": true, 00:20:01.361 "data_offset": 2048, 00:20:01.361 "data_size": 63488 00:20:01.361 }, 00:20:01.361 { 00:20:01.361 "name": "BaseBdev2", 00:20:01.361 "uuid": "19eaa77a-59ac-5d5d-99ab-f6f0d1a784df", 00:20:01.361 "is_configured": true, 00:20:01.361 "data_offset": 2048, 00:20:01.361 "data_size": 63488 00:20:01.361 } 00:20:01.361 ] 00:20:01.361 }' 00:20:01.361 21:16:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:01.361 21:16:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:01.361 21:16:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:01.361 21:16:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:01.361 21:16:23 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.361 21:16:23 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:01.619 21:16:24 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:20:01.619 21:16:24 -- bdev/bdev_raid.sh@709 -- # killprocess 137645 00:20:01.619 21:16:24 -- common/autotest_common.sh@926 -- # '[' -z 137645 ']' 00:20:01.619 21:16:24 -- common/autotest_common.sh@930 -- # kill -0 137645 00:20:01.619 21:16:24 -- common/autotest_common.sh@931 -- # uname 00:20:01.619 21:16:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:01.620 21:16:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137645 00:20:01.620 killing process with pid 137645 00:20:01.620 Received shutdown signal, test time was about 15.040500 seconds 00:20:01.620 00:20:01.620 Latency(us) 00:20:01.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.620 =================================================================================================================== 00:20:01.620 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:01.620 21:16:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:01.620 21:16:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:01.620 21:16:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 137645' 00:20:01.620 21:16:24 -- common/autotest_common.sh@945 -- # kill 137645 00:20:01.620 21:16:24 -- common/autotest_common.sh@950 -- # wait 137645 00:20:01.620 [2024-06-07 21:16:24.155566] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:01.620 [2024-06-07 21:16:24.155734] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:01.620 [2024-06-07 21:16:24.155847] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:01.620 [2024-06-07 21:16:24.155871] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:20:01.620 [2024-06-07 
21:16:24.191075] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:01.878 ************************************ 00:20:01.878 END TEST raid_rebuild_test_sb_io 00:20:01.878 ************************************ 00:20:01.878 21:16:24 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:01.878 00:20:01.878 real 0m19.682s 00:20:01.878 user 0m32.345s 00:20:01.878 sys 0m2.156s 00:20:01.878 21:16:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:01.878 21:16:24 -- common/autotest_common.sh@10 -- # set +x 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:20:02.137 21:16:24 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:02.137 21:16:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:02.137 21:16:24 -- common/autotest_common.sh@10 -- # set +x 00:20:02.137 ************************************ 00:20:02.137 START TEST raid_rebuild_test 00:20:02.137 ************************************ 00:20:02.137 21:16:24 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false false 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@544 -- # raid_pid=138226 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@545 -- # waitforlisten 138226 /var/tmp/spdk-raid.sock 00:20:02.137 21:16:24 -- common/autotest_common.sh@819 -- # '[' -z 138226 ']' 00:20:02.137 21:16:24 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:20:02.137 21:16:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:02.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:02.137 21:16:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:02.137 21:16:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:02.137 21:16:24 -- common/autotest_common.sh@10 -- # set +x 00:20:02.137 21:16:24 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:02.137 [2024-06-07 21:16:24.658242] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:02.137 [2024-06-07 21:16:24.658652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138226 ] 00:20:02.137 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:02.137 Zero copy mechanism will not be used. 00:20:02.396 [2024-06-07 21:16:24.822827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.396 [2024-06-07 21:16:24.919981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.396 [2024-06-07 21:16:24.996544] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:02.964 21:16:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:02.964 21:16:25 -- common/autotest_common.sh@852 -- # return 0 00:20:02.964 21:16:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:02.964 21:16:25 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:02.964 21:16:25 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:03.223 BaseBdev1 00:20:03.223 21:16:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:03.223 21:16:25 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:03.223 21:16:25 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:03.481 BaseBdev2 00:20:03.481 21:16:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:03.481 21:16:26 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:03.481 21:16:26 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:03.740 BaseBdev3 00:20:03.740 21:16:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:03.740 21:16:26 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:03.740 21:16:26 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:03.999 BaseBdev4 00:20:03.999 21:16:26 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:03.999 spare_malloc 00:20:04.258 21:16:26 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:04.258 spare_delay 00:20:04.258 21:16:26 -- bdev/bdev_raid.sh@560 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:04.517 [2024-06-07 21:16:27.050393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:04.517 [2024-06-07 21:16:27.050493] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.517 [2024-06-07 21:16:27.050542] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:04.517 [2024-06-07 21:16:27.050599] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.517 [2024-06-07 21:16:27.053542] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.517 [2024-06-07 21:16:27.053598] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:04.517 spare 00:20:04.517 21:16:27 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:04.776 [2024-06-07 21:16:27.250645] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:04.776 [2024-06-07 21:16:27.253263] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:04.776 [2024-06-07 21:16:27.253329] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:04.776 [2024-06-07 21:16:27.253378] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:04.776 [2024-06-07 21:16:27.253494] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:20:04.776 [2024-06-07 21:16:27.253510] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:04.776 [2024-06-07 21:16:27.253785] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:04.776 [2024-06-07 21:16:27.254238] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:20:04.776 [2024-06-07 21:16:27.254264] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:20:04.776 [2024-06-07 21:16:27.254515] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.776 21:16:27 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:04.776 21:16:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:04.776 21:16:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:04.776 21:16:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:04.776 21:16:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:04.776 21:16:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:04.776 21:16:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:04.776 21:16:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:04.776 21:16:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:04.776 21:16:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:04.776 21:16:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.776 21:16:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.035 21:16:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:05.035 "name": "raid_bdev1", 00:20:05.035 "uuid": "bd61c8b2-ba83-4990-854b-7d3330f0ed1d", 
00:20:05.035 "strip_size_kb": 0, 00:20:05.035 "state": "online", 00:20:05.035 "raid_level": "raid1", 00:20:05.035 "superblock": false, 00:20:05.035 "num_base_bdevs": 4, 00:20:05.035 "num_base_bdevs_discovered": 4, 00:20:05.035 "num_base_bdevs_operational": 4, 00:20:05.035 "base_bdevs_list": [ 00:20:05.035 { 00:20:05.035 "name": "BaseBdev1", 00:20:05.035 "uuid": "877e8be0-b82a-4cf4-83be-7cac620f53d8", 00:20:05.035 "is_configured": true, 00:20:05.035 "data_offset": 0, 00:20:05.035 "data_size": 65536 00:20:05.035 }, 00:20:05.035 { 00:20:05.035 "name": "BaseBdev2", 00:20:05.035 "uuid": "f151dfe4-7731-439b-85b4-978d9ee2bae9", 00:20:05.035 "is_configured": true, 00:20:05.035 "data_offset": 0, 00:20:05.035 "data_size": 65536 00:20:05.035 }, 00:20:05.035 { 00:20:05.035 "name": "BaseBdev3", 00:20:05.035 "uuid": "bb6ebb33-b025-4b27-9f04-7f6f625992e8", 00:20:05.035 "is_configured": true, 00:20:05.035 "data_offset": 0, 00:20:05.035 "data_size": 65536 00:20:05.035 }, 00:20:05.035 { 00:20:05.035 "name": "BaseBdev4", 00:20:05.035 "uuid": "ad3f3558-4657-4dd4-a10c-0659d6d63094", 00:20:05.035 "is_configured": true, 00:20:05.035 "data_offset": 0, 00:20:05.035 "data_size": 65536 00:20:05.035 } 00:20:05.035 ] 00:20:05.035 }' 00:20:05.035 21:16:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:05.035 21:16:27 -- common/autotest_common.sh@10 -- # set +x 00:20:05.602 21:16:28 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:05.602 21:16:28 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:05.860 [2024-06-07 21:16:28.427304] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:05.860 21:16:28 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:05.860 21:16:28 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.860 21:16:28 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:06.120 21:16:28 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:06.121 21:16:28 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:06.121 21:16:28 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:06.121 21:16:28 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:06.121 21:16:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:06.121 21:16:28 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:06.121 21:16:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:06.121 21:16:28 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:06.121 21:16:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:06.121 21:16:28 -- bdev/nbd_common.sh@12 -- # local i 00:20:06.121 21:16:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:06.121 21:16:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:06.121 21:16:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:06.383 [2024-06-07 21:16:28.831063] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:20:06.383 /dev/nbd0 00:20:06.383 21:16:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:06.383 21:16:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:06.383 21:16:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:06.383 21:16:28 -- common/autotest_common.sh@857 -- # local i 00:20:06.383 21:16:28 -- common/autotest_common.sh@859 -- # (( i = 
1 )) 00:20:06.383 21:16:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:06.383 21:16:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:06.383 21:16:28 -- common/autotest_common.sh@861 -- # break 00:20:06.383 21:16:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:06.383 21:16:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:06.383 21:16:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:06.383 1+0 records in 00:20:06.383 1+0 records out 00:20:06.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309533 s, 13.2 MB/s 00:20:06.383 21:16:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.383 21:16:28 -- common/autotest_common.sh@874 -- # size=4096 00:20:06.383 21:16:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.383 21:16:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:06.383 21:16:28 -- common/autotest_common.sh@877 -- # return 0 00:20:06.383 21:16:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:06.383 21:16:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:06.383 21:16:28 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:06.383 21:16:28 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:06.383 21:16:28 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:12.953 65536+0 records in 00:20:12.953 65536+0 records out 00:20:12.953 33554432 bytes (34 MB, 32 MiB) copied, 6.02246 s, 5.6 MB/s 00:20:12.953 21:16:34 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:12.953 21:16:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:12.953 21:16:34 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:12.953 21:16:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:12.953 21:16:34 -- bdev/nbd_common.sh@51 -- # local i 00:20:12.953 21:16:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:12.953 21:16:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:12.953 21:16:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:12.953 21:16:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:12.953 21:16:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:12.953 21:16:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:12.953 21:16:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:12.953 21:16:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:12.953 21:16:35 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:12.953 [2024-06-07 21:16:35.164322] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:12.953 21:16:35 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:12.953 21:16:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:12.953 21:16:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:12.953 21:16:35 -- bdev/nbd_common.sh@41 -- # break 00:20:12.953 21:16:35 -- bdev/nbd_common.sh@45 -- # return 0 00:20:12.953 21:16:35 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:12.953 [2024-06-07 21:16:35.520024] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:12.953 21:16:35 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 3 00:20:12.953 21:16:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:12.953 21:16:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:12.953 21:16:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:12.953 21:16:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:12.953 21:16:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:12.953 21:16:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:12.953 21:16:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:12.953 21:16:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:12.953 21:16:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:12.953 21:16:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.953 21:16:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.212 21:16:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:13.212 "name": "raid_bdev1", 00:20:13.212 "uuid": "bd61c8b2-ba83-4990-854b-7d3330f0ed1d", 00:20:13.212 "strip_size_kb": 0, 00:20:13.212 "state": "online", 00:20:13.212 "raid_level": "raid1", 00:20:13.212 "superblock": false, 00:20:13.212 "num_base_bdevs": 4, 00:20:13.212 "num_base_bdevs_discovered": 3, 00:20:13.212 "num_base_bdevs_operational": 3, 00:20:13.212 "base_bdevs_list": [ 00:20:13.212 { 00:20:13.212 "name": null, 00:20:13.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.212 "is_configured": false, 00:20:13.212 "data_offset": 0, 00:20:13.212 "data_size": 65536 00:20:13.212 }, 00:20:13.212 { 00:20:13.212 "name": "BaseBdev2", 00:20:13.212 "uuid": "f151dfe4-7731-439b-85b4-978d9ee2bae9", 00:20:13.212 "is_configured": true, 00:20:13.212 "data_offset": 0, 00:20:13.212 "data_size": 65536 00:20:13.212 }, 00:20:13.212 { 00:20:13.212 "name": "BaseBdev3", 00:20:13.212 "uuid": "bb6ebb33-b025-4b27-9f04-7f6f625992e8", 00:20:13.212 "is_configured": true, 00:20:13.212 "data_offset": 0, 00:20:13.212 "data_size": 65536 00:20:13.212 }, 00:20:13.212 { 00:20:13.212 "name": "BaseBdev4", 00:20:13.212 "uuid": "ad3f3558-4657-4dd4-a10c-0659d6d63094", 00:20:13.212 "is_configured": true, 00:20:13.212 "data_offset": 0, 00:20:13.212 "data_size": 65536 00:20:13.212 } 00:20:13.212 ] 00:20:13.212 }' 00:20:13.212 21:16:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:13.212 21:16:35 -- common/autotest_common.sh@10 -- # set +x 00:20:14.149 21:16:36 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:14.149 [2024-06-07 21:16:36.744445] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:14.149 [2024-06-07 21:16:36.744576] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:14.149 [2024-06-07 21:16:36.751456] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b6a0 00:20:14.149 [2024-06-07 21:16:36.754088] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:14.149 21:16:36 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:15.555 21:16:37 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:15.555 21:16:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:15.555 21:16:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:15.555 21:16:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:15.555 21:16:37 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:15.555 21:16:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.555 21:16:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.555 21:16:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:15.555 "name": "raid_bdev1", 00:20:15.555 "uuid": "bd61c8b2-ba83-4990-854b-7d3330f0ed1d", 00:20:15.555 "strip_size_kb": 0, 00:20:15.555 "state": "online", 00:20:15.555 "raid_level": "raid1", 00:20:15.555 "superblock": false, 00:20:15.555 "num_base_bdevs": 4, 00:20:15.555 "num_base_bdevs_discovered": 4, 00:20:15.555 "num_base_bdevs_operational": 4, 00:20:15.555 "process": { 00:20:15.555 "type": "rebuild", 00:20:15.555 "target": "spare", 00:20:15.555 "progress": { 00:20:15.555 "blocks": 24576, 00:20:15.555 "percent": 37 00:20:15.555 } 00:20:15.555 }, 00:20:15.555 "base_bdevs_list": [ 00:20:15.555 { 00:20:15.555 "name": "spare", 00:20:15.555 "uuid": "43b3278c-86a7-5b7f-8632-bbec5cde875d", 00:20:15.555 "is_configured": true, 00:20:15.555 "data_offset": 0, 00:20:15.555 "data_size": 65536 00:20:15.555 }, 00:20:15.555 { 00:20:15.555 "name": "BaseBdev2", 00:20:15.555 "uuid": "f151dfe4-7731-439b-85b4-978d9ee2bae9", 00:20:15.555 "is_configured": true, 00:20:15.555 "data_offset": 0, 00:20:15.555 "data_size": 65536 00:20:15.555 }, 00:20:15.555 { 00:20:15.555 "name": "BaseBdev3", 00:20:15.555 "uuid": "bb6ebb33-b025-4b27-9f04-7f6f625992e8", 00:20:15.555 "is_configured": true, 00:20:15.555 "data_offset": 0, 00:20:15.555 "data_size": 65536 00:20:15.555 }, 00:20:15.555 { 00:20:15.555 "name": "BaseBdev4", 00:20:15.555 "uuid": "ad3f3558-4657-4dd4-a10c-0659d6d63094", 00:20:15.555 "is_configured": true, 00:20:15.555 "data_offset": 0, 00:20:15.555 "data_size": 65536 00:20:15.555 } 00:20:15.555 ] 00:20:15.555 }' 00:20:15.555 21:16:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:15.555 21:16:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:15.555 21:16:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:15.555 21:16:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:15.555 21:16:38 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:15.813 [2024-06-07 21:16:38.339916] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:15.813 [2024-06-07 21:16:38.367914] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:15.813 [2024-06-07 21:16:38.368148] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.813 21:16:38 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:15.813 21:16:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:15.813 21:16:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:15.813 21:16:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:15.814 21:16:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:15.814 21:16:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:15.814 21:16:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:15.814 21:16:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:15.814 21:16:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:15.814 21:16:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:15.814 
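Every verify_raid_bdev_state call traced in this run reduces to the same two commands: dump all raid bdevs over the test socket with bdev_raid_get_bdevs, then select the bdev under test with jq and compare its fields against the expected values. A minimal stand-alone sketch of that pattern follows, assuming the rpc.py path and /var/tmp/spdk-raid.sock socket used throughout this log; the asserted values below are illustrative (taken from the degraded 3-of-4 state shown above), not a copy of the script itself.

#!/usr/bin/env bash
# Hedged sketch of the state check performed by verify_raid_bdev_state
# in the trace above; uses only RPCs and jq filters that appear in this log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Dump every raid bdev and keep only the one under test.
info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
       jq -r '.[] | select(.name == "raid_bdev1")')

# Assert on the same fields of the JSON shown in the log.
[[ $(jq -r '.state' <<<"$info") == online ]] || exit 1
[[ $(jq -r '.raid_level' <<<"$info") == raid1 ]] || exit 1
[[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") == 3 ]] || exit 1
[[ $(jq -r '.num_base_bdevs_operational' <<<"$info") == 3 ]] || exit 1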
21:16:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.814 21:16:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.071 21:16:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:16.071 "name": "raid_bdev1", 00:20:16.071 "uuid": "bd61c8b2-ba83-4990-854b-7d3330f0ed1d", 00:20:16.071 "strip_size_kb": 0, 00:20:16.071 "state": "online", 00:20:16.071 "raid_level": "raid1", 00:20:16.071 "superblock": false, 00:20:16.071 "num_base_bdevs": 4, 00:20:16.071 "num_base_bdevs_discovered": 3, 00:20:16.071 "num_base_bdevs_operational": 3, 00:20:16.072 "base_bdevs_list": [ 00:20:16.072 { 00:20:16.072 "name": null, 00:20:16.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.072 "is_configured": false, 00:20:16.072 "data_offset": 0, 00:20:16.072 "data_size": 65536 00:20:16.072 }, 00:20:16.072 { 00:20:16.072 "name": "BaseBdev2", 00:20:16.072 "uuid": "f151dfe4-7731-439b-85b4-978d9ee2bae9", 00:20:16.072 "is_configured": true, 00:20:16.072 "data_offset": 0, 00:20:16.072 "data_size": 65536 00:20:16.072 }, 00:20:16.072 { 00:20:16.072 "name": "BaseBdev3", 00:20:16.072 "uuid": "bb6ebb33-b025-4b27-9f04-7f6f625992e8", 00:20:16.072 "is_configured": true, 00:20:16.072 "data_offset": 0, 00:20:16.072 "data_size": 65536 00:20:16.072 }, 00:20:16.072 { 00:20:16.072 "name": "BaseBdev4", 00:20:16.072 "uuid": "ad3f3558-4657-4dd4-a10c-0659d6d63094", 00:20:16.072 "is_configured": true, 00:20:16.072 "data_offset": 0, 00:20:16.072 "data_size": 65536 00:20:16.072 } 00:20:16.072 ] 00:20:16.072 }' 00:20:16.072 21:16:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:16.072 21:16:38 -- common/autotest_common.sh@10 -- # set +x 00:20:16.638 21:16:39 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:16.638 21:16:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:16.638 21:16:39 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:16.638 21:16:39 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:16.638 21:16:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:16.638 21:16:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.638 21:16:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.896 21:16:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:16.896 "name": "raid_bdev1", 00:20:16.896 "uuid": "bd61c8b2-ba83-4990-854b-7d3330f0ed1d", 00:20:16.896 "strip_size_kb": 0, 00:20:16.896 "state": "online", 00:20:16.896 "raid_level": "raid1", 00:20:16.896 "superblock": false, 00:20:16.896 "num_base_bdevs": 4, 00:20:16.896 "num_base_bdevs_discovered": 3, 00:20:16.896 "num_base_bdevs_operational": 3, 00:20:16.896 "base_bdevs_list": [ 00:20:16.896 { 00:20:16.896 "name": null, 00:20:16.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.896 "is_configured": false, 00:20:16.896 "data_offset": 0, 00:20:16.896 "data_size": 65536 00:20:16.896 }, 00:20:16.896 { 00:20:16.896 "name": "BaseBdev2", 00:20:16.896 "uuid": "f151dfe4-7731-439b-85b4-978d9ee2bae9", 00:20:16.896 "is_configured": true, 00:20:16.896 "data_offset": 0, 00:20:16.896 "data_size": 65536 00:20:16.896 }, 00:20:16.896 { 00:20:16.896 "name": "BaseBdev3", 00:20:16.896 "uuid": "bb6ebb33-b025-4b27-9f04-7f6f625992e8", 00:20:16.896 "is_configured": true, 00:20:16.896 "data_offset": 0, 00:20:16.896 "data_size": 65536 00:20:16.896 }, 00:20:16.896 { 00:20:16.896 
"name": "BaseBdev4", 00:20:16.896 "uuid": "ad3f3558-4657-4dd4-a10c-0659d6d63094", 00:20:16.896 "is_configured": true, 00:20:16.896 "data_offset": 0, 00:20:16.896 "data_size": 65536 00:20:16.896 } 00:20:16.896 ] 00:20:16.896 }' 00:20:16.896 21:16:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:17.155 21:16:39 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:17.155 21:16:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:17.155 21:16:39 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:17.155 21:16:39 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:17.414 [2024-06-07 21:16:39.880203] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:17.414 [2024-06-07 21:16:39.880257] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:17.414 [2024-06-07 21:16:39.885743] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b840 00:20:17.414 [2024-06-07 21:16:39.887860] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:17.414 21:16:39 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:18.350 21:16:40 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.350 21:16:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:18.350 21:16:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:18.350 21:16:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:18.350 21:16:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:18.350 21:16:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.350 21:16:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.608 21:16:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:18.608 "name": "raid_bdev1", 00:20:18.608 "uuid": "bd61c8b2-ba83-4990-854b-7d3330f0ed1d", 00:20:18.609 "strip_size_kb": 0, 00:20:18.609 "state": "online", 00:20:18.609 "raid_level": "raid1", 00:20:18.609 "superblock": false, 00:20:18.609 "num_base_bdevs": 4, 00:20:18.609 "num_base_bdevs_discovered": 4, 00:20:18.609 "num_base_bdevs_operational": 4, 00:20:18.609 "process": { 00:20:18.609 "type": "rebuild", 00:20:18.609 "target": "spare", 00:20:18.609 "progress": { 00:20:18.609 "blocks": 24576, 00:20:18.609 "percent": 37 00:20:18.609 } 00:20:18.609 }, 00:20:18.609 "base_bdevs_list": [ 00:20:18.609 { 00:20:18.609 "name": "spare", 00:20:18.609 "uuid": "43b3278c-86a7-5b7f-8632-bbec5cde875d", 00:20:18.609 "is_configured": true, 00:20:18.609 "data_offset": 0, 00:20:18.609 "data_size": 65536 00:20:18.609 }, 00:20:18.609 { 00:20:18.609 "name": "BaseBdev2", 00:20:18.609 "uuid": "f151dfe4-7731-439b-85b4-978d9ee2bae9", 00:20:18.609 "is_configured": true, 00:20:18.609 "data_offset": 0, 00:20:18.609 "data_size": 65536 00:20:18.609 }, 00:20:18.609 { 00:20:18.609 "name": "BaseBdev3", 00:20:18.609 "uuid": "bb6ebb33-b025-4b27-9f04-7f6f625992e8", 00:20:18.609 "is_configured": true, 00:20:18.609 "data_offset": 0, 00:20:18.609 "data_size": 65536 00:20:18.609 }, 00:20:18.609 { 00:20:18.609 "name": "BaseBdev4", 00:20:18.609 "uuid": "ad3f3558-4657-4dd4-a10c-0659d6d63094", 00:20:18.609 "is_configured": true, 00:20:18.609 "data_offset": 0, 00:20:18.609 "data_size": 65536 00:20:18.609 } 00:20:18.609 ] 00:20:18.609 }' 00:20:18.609 21:16:41 -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:18.609 21:16:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:18.609 21:16:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:18.609 21:16:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.609 21:16:41 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:18.609 21:16:41 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:20:18.609 21:16:41 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:18.609 21:16:41 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:20:18.609 21:16:41 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:18.867 [2024-06-07 21:16:41.479448] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:18.867 [2024-06-07 21:16:41.498123] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d0b840 00:20:18.867 21:16:41 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:20:18.867 21:16:41 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:20:18.867 21:16:41 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.867 21:16:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:18.867 21:16:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:18.867 21:16:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:18.867 21:16:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:18.867 21:16:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.867 21:16:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.125 21:16:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:19.125 "name": "raid_bdev1", 00:20:19.125 "uuid": "bd61c8b2-ba83-4990-854b-7d3330f0ed1d", 00:20:19.125 "strip_size_kb": 0, 00:20:19.125 "state": "online", 00:20:19.125 "raid_level": "raid1", 00:20:19.125 "superblock": false, 00:20:19.125 "num_base_bdevs": 4, 00:20:19.125 "num_base_bdevs_discovered": 3, 00:20:19.125 "num_base_bdevs_operational": 3, 00:20:19.125 "process": { 00:20:19.125 "type": "rebuild", 00:20:19.125 "target": "spare", 00:20:19.125 "progress": { 00:20:19.125 "blocks": 36864, 00:20:19.125 "percent": 56 00:20:19.125 } 00:20:19.125 }, 00:20:19.125 "base_bdevs_list": [ 00:20:19.125 { 00:20:19.125 "name": "spare", 00:20:19.125 "uuid": "43b3278c-86a7-5b7f-8632-bbec5cde875d", 00:20:19.125 "is_configured": true, 00:20:19.125 "data_offset": 0, 00:20:19.125 "data_size": 65536 00:20:19.125 }, 00:20:19.125 { 00:20:19.125 "name": null, 00:20:19.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.125 "is_configured": false, 00:20:19.125 "data_offset": 0, 00:20:19.125 "data_size": 65536 00:20:19.125 }, 00:20:19.125 { 00:20:19.125 "name": "BaseBdev3", 00:20:19.125 "uuid": "bb6ebb33-b025-4b27-9f04-7f6f625992e8", 00:20:19.125 "is_configured": true, 00:20:19.125 "data_offset": 0, 00:20:19.125 "data_size": 65536 00:20:19.125 }, 00:20:19.125 { 00:20:19.125 "name": "BaseBdev4", 00:20:19.125 "uuid": "ad3f3558-4657-4dd4-a10c-0659d6d63094", 00:20:19.125 "is_configured": true, 00:20:19.125 "data_offset": 0, 00:20:19.125 "data_size": 65536 00:20:19.125 } 00:20:19.125 ] 00:20:19.125 }' 00:20:19.125 21:16:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:19.384 21:16:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:20:19.384 21:16:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:19.384 21:16:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:19.384 21:16:41 -- bdev/bdev_raid.sh@657 -- # local timeout=461 00:20:19.384 21:16:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:19.384 21:16:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:19.384 21:16:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:19.384 21:16:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:19.384 21:16:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:19.384 21:16:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:19.384 21:16:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.384 21:16:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.384 21:16:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:19.384 "name": "raid_bdev1", 00:20:19.384 "uuid": "bd61c8b2-ba83-4990-854b-7d3330f0ed1d", 00:20:19.384 "strip_size_kb": 0, 00:20:19.384 "state": "online", 00:20:19.384 "raid_level": "raid1", 00:20:19.384 "superblock": false, 00:20:19.384 "num_base_bdevs": 4, 00:20:19.384 "num_base_bdevs_discovered": 3, 00:20:19.384 "num_base_bdevs_operational": 3, 00:20:19.384 "process": { 00:20:19.384 "type": "rebuild", 00:20:19.384 "target": "spare", 00:20:19.384 "progress": { 00:20:19.384 "blocks": 43008, 00:20:19.384 "percent": 65 00:20:19.384 } 00:20:19.384 }, 00:20:19.384 "base_bdevs_list": [ 00:20:19.384 { 00:20:19.384 "name": "spare", 00:20:19.384 "uuid": "43b3278c-86a7-5b7f-8632-bbec5cde875d", 00:20:19.384 "is_configured": true, 00:20:19.384 "data_offset": 0, 00:20:19.384 "data_size": 65536 00:20:19.384 }, 00:20:19.384 { 00:20:19.384 "name": null, 00:20:19.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.384 "is_configured": false, 00:20:19.384 "data_offset": 0, 00:20:19.384 "data_size": 65536 00:20:19.384 }, 00:20:19.384 { 00:20:19.384 "name": "BaseBdev3", 00:20:19.384 "uuid": "bb6ebb33-b025-4b27-9f04-7f6f625992e8", 00:20:19.384 "is_configured": true, 00:20:19.384 "data_offset": 0, 00:20:19.384 "data_size": 65536 00:20:19.384 }, 00:20:19.384 { 00:20:19.384 "name": "BaseBdev4", 00:20:19.384 "uuid": "ad3f3558-4657-4dd4-a10c-0659d6d63094", 00:20:19.384 "is_configured": true, 00:20:19.384 "data_offset": 0, 00:20:19.384 "data_size": 65536 00:20:19.384 } 00:20:19.384 ] 00:20:19.384 }' 00:20:19.384 21:16:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:19.643 21:16:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:19.643 21:16:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:19.643 21:16:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:19.643 21:16:42 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:20.585 [2024-06-07 21:16:43.108199] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:20.585 [2024-06-07 21:16:43.108302] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:20.585 [2024-06-07 21:16:43.108395] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:20.585 21:16:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:20.585 21:16:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.585 21:16:43 -- 
bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:20.585 21:16:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:20.585 21:16:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:20.585 21:16:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:20.585 21:16:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.585 21:16:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.850 21:16:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:20.850 "name": "raid_bdev1", 00:20:20.850 "uuid": "bd61c8b2-ba83-4990-854b-7d3330f0ed1d", 00:20:20.850 "strip_size_kb": 0, 00:20:20.850 "state": "online", 00:20:20.850 "raid_level": "raid1", 00:20:20.850 "superblock": false, 00:20:20.850 "num_base_bdevs": 4, 00:20:20.850 "num_base_bdevs_discovered": 3, 00:20:20.850 "num_base_bdevs_operational": 3, 00:20:20.850 "base_bdevs_list": [ 00:20:20.850 { 00:20:20.850 "name": "spare", 00:20:20.850 "uuid": "43b3278c-86a7-5b7f-8632-bbec5cde875d", 00:20:20.850 "is_configured": true, 00:20:20.850 "data_offset": 0, 00:20:20.850 "data_size": 65536 00:20:20.850 }, 00:20:20.850 { 00:20:20.850 "name": null, 00:20:20.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.850 "is_configured": false, 00:20:20.850 "data_offset": 0, 00:20:20.850 "data_size": 65536 00:20:20.850 }, 00:20:20.850 { 00:20:20.850 "name": "BaseBdev3", 00:20:20.850 "uuid": "bb6ebb33-b025-4b27-9f04-7f6f625992e8", 00:20:20.850 "is_configured": true, 00:20:20.850 "data_offset": 0, 00:20:20.850 "data_size": 65536 00:20:20.850 }, 00:20:20.850 { 00:20:20.850 "name": "BaseBdev4", 00:20:20.850 "uuid": "ad3f3558-4657-4dd4-a10c-0659d6d63094", 00:20:20.850 "is_configured": true, 00:20:20.850 "data_offset": 0, 00:20:20.850 "data_size": 65536 00:20:20.850 } 00:20:20.850 ] 00:20:20.850 }' 00:20:20.850 21:16:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:20.850 21:16:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:20.850 21:16:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:21.109 21:16:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:21.109 21:16:43 -- bdev/bdev_raid.sh@660 -- # break 00:20:21.109 21:16:43 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:21.109 21:16:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:21.109 21:16:43 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:21.109 21:16:43 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:21.109 21:16:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:21.109 21:16:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.109 21:16:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.109 21:16:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:21.109 "name": "raid_bdev1", 00:20:21.109 "uuid": "bd61c8b2-ba83-4990-854b-7d3330f0ed1d", 00:20:21.109 "strip_size_kb": 0, 00:20:21.109 "state": "online", 00:20:21.109 "raid_level": "raid1", 00:20:21.109 "superblock": false, 00:20:21.109 "num_base_bdevs": 4, 00:20:21.109 "num_base_bdevs_discovered": 3, 00:20:21.109 "num_base_bdevs_operational": 3, 00:20:21.109 "base_bdevs_list": [ 00:20:21.109 { 00:20:21.109 "name": "spare", 00:20:21.109 "uuid": "43b3278c-86a7-5b7f-8632-bbec5cde875d", 00:20:21.109 "is_configured": true, 00:20:21.109 
"data_offset": 0, 00:20:21.109 "data_size": 65536 00:20:21.109 }, 00:20:21.109 { 00:20:21.109 "name": null, 00:20:21.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.109 "is_configured": false, 00:20:21.109 "data_offset": 0, 00:20:21.109 "data_size": 65536 00:20:21.109 }, 00:20:21.109 { 00:20:21.109 "name": "BaseBdev3", 00:20:21.109 "uuid": "bb6ebb33-b025-4b27-9f04-7f6f625992e8", 00:20:21.109 "is_configured": true, 00:20:21.109 "data_offset": 0, 00:20:21.109 "data_size": 65536 00:20:21.109 }, 00:20:21.109 { 00:20:21.109 "name": "BaseBdev4", 00:20:21.109 "uuid": "ad3f3558-4657-4dd4-a10c-0659d6d63094", 00:20:21.109 "is_configured": true, 00:20:21.109 "data_offset": 0, 00:20:21.109 "data_size": 65536 00:20:21.109 } 00:20:21.109 ] 00:20:21.109 }' 00:20:21.109 21:16:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:21.368 21:16:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:21.368 21:16:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:21.368 21:16:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:21.368 21:16:43 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:21.368 21:16:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:21.368 21:16:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:21.368 21:16:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:21.368 21:16:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:21.368 21:16:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:21.368 21:16:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:21.368 21:16:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:21.368 21:16:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:21.368 21:16:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:21.368 21:16:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.368 21:16:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.627 21:16:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:21.627 "name": "raid_bdev1", 00:20:21.627 "uuid": "bd61c8b2-ba83-4990-854b-7d3330f0ed1d", 00:20:21.627 "strip_size_kb": 0, 00:20:21.627 "state": "online", 00:20:21.627 "raid_level": "raid1", 00:20:21.627 "superblock": false, 00:20:21.627 "num_base_bdevs": 4, 00:20:21.627 "num_base_bdevs_discovered": 3, 00:20:21.627 "num_base_bdevs_operational": 3, 00:20:21.627 "base_bdevs_list": [ 00:20:21.627 { 00:20:21.627 "name": "spare", 00:20:21.627 "uuid": "43b3278c-86a7-5b7f-8632-bbec5cde875d", 00:20:21.627 "is_configured": true, 00:20:21.627 "data_offset": 0, 00:20:21.627 "data_size": 65536 00:20:21.627 }, 00:20:21.627 { 00:20:21.627 "name": null, 00:20:21.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.627 "is_configured": false, 00:20:21.627 "data_offset": 0, 00:20:21.627 "data_size": 65536 00:20:21.627 }, 00:20:21.627 { 00:20:21.627 "name": "BaseBdev3", 00:20:21.627 "uuid": "bb6ebb33-b025-4b27-9f04-7f6f625992e8", 00:20:21.627 "is_configured": true, 00:20:21.627 "data_offset": 0, 00:20:21.627 "data_size": 65536 00:20:21.627 }, 00:20:21.627 { 00:20:21.627 "name": "BaseBdev4", 00:20:21.627 "uuid": "ad3f3558-4657-4dd4-a10c-0659d6d63094", 00:20:21.627 "is_configured": true, 00:20:21.627 "data_offset": 0, 00:20:21.627 "data_size": 65536 00:20:21.627 } 00:20:21.627 ] 00:20:21.627 }' 00:20:21.627 21:16:44 -- bdev/bdev_raid.sh@129 -- # 
xtrace_disable 00:20:21.627 21:16:44 -- common/autotest_common.sh@10 -- # set +x 00:20:22.194 21:16:44 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:22.453 [2024-06-07 21:16:44.953907] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:22.453 [2024-06-07 21:16:44.953948] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:22.453 [2024-06-07 21:16:44.954091] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:22.453 [2024-06-07 21:16:44.954203] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:22.453 [2024-06-07 21:16:44.954232] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:20:22.453 21:16:44 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.453 21:16:44 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:22.711 21:16:45 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:22.711 21:16:45 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:22.711 21:16:45 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:22.711 21:16:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:22.711 21:16:45 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:22.711 21:16:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:22.711 21:16:45 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:22.711 21:16:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:22.711 21:16:45 -- bdev/nbd_common.sh@12 -- # local i 00:20:22.711 21:16:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:22.711 21:16:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:22.711 21:16:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:23.061 /dev/nbd0 00:20:23.061 21:16:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:23.061 21:16:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:23.061 21:16:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:23.061 21:16:45 -- common/autotest_common.sh@857 -- # local i 00:20:23.061 21:16:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:23.061 21:16:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:23.061 21:16:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:23.061 21:16:45 -- common/autotest_common.sh@861 -- # break 00:20:23.061 21:16:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:23.061 21:16:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:23.061 21:16:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:23.061 1+0 records in 00:20:23.061 1+0 records out 00:20:23.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188028 s, 21.8 MB/s 00:20:23.061 21:16:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.061 21:16:45 -- common/autotest_common.sh@874 -- # size=4096 00:20:23.061 21:16:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.061 21:16:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:23.061 21:16:45 -- 
common/autotest_common.sh@877 -- # return 0 00:20:23.061 21:16:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:23.061 21:16:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:23.061 21:16:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:23.061 /dev/nbd1 00:20:23.061 21:16:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:23.061 21:16:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:23.061 21:16:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:23.061 21:16:45 -- common/autotest_common.sh@857 -- # local i 00:20:23.061 21:16:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:23.061 21:16:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:23.061 21:16:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:23.061 21:16:45 -- common/autotest_common.sh@861 -- # break 00:20:23.061 21:16:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:23.062 21:16:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:23.062 21:16:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:23.062 1+0 records in 00:20:23.062 1+0 records out 00:20:23.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267065 s, 15.3 MB/s 00:20:23.062 21:16:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.062 21:16:45 -- common/autotest_common.sh@874 -- # size=4096 00:20:23.062 21:16:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.062 21:16:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:23.062 21:16:45 -- common/autotest_common.sh@877 -- # return 0 00:20:23.062 21:16:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:23.062 21:16:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:23.062 21:16:45 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:23.320 21:16:45 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:23.320 21:16:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:23.320 21:16:45 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:23.320 21:16:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:23.320 21:16:45 -- bdev/nbd_common.sh@51 -- # local i 00:20:23.320 21:16:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:23.320 21:16:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:23.320 21:16:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:23.320 21:16:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:23.320 21:16:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:23.320 21:16:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:23.320 21:16:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:23.320 21:16:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:23.577 21:16:45 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:23.577 21:16:46 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:23.577 21:16:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:23.577 21:16:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:23.577 21:16:46 -- bdev/nbd_common.sh@41 -- # break 00:20:23.577 21:16:46 -- bdev/nbd_common.sh@45 -- # return 0 00:20:23.577 21:16:46 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:20:23.577 21:16:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:23.835 21:16:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:23.835 21:16:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:23.835 21:16:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:23.835 21:16:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:23.835 21:16:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:23.835 21:16:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:23.835 21:16:46 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:23.835 21:16:46 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:23.835 21:16:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:23.835 21:16:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:23.835 21:16:46 -- bdev/nbd_common.sh@41 -- # break 00:20:23.835 21:16:46 -- bdev/nbd_common.sh@45 -- # return 0 00:20:23.835 21:16:46 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:23.835 21:16:46 -- bdev/bdev_raid.sh@709 -- # killprocess 138226 00:20:23.835 21:16:46 -- common/autotest_common.sh@926 -- # '[' -z 138226 ']' 00:20:23.835 21:16:46 -- common/autotest_common.sh@930 -- # kill -0 138226 00:20:23.835 21:16:46 -- common/autotest_common.sh@931 -- # uname 00:20:23.835 21:16:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:23.835 21:16:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 138226 00:20:23.835 21:16:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:23.835 21:16:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:23.835 killing process with pid 138226 00:20:23.835 Received shutdown signal, test time was about 60.000000 seconds 00:20:23.835 00:20:23.835 Latency(us) 00:20:23.835 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.835 =================================================================================================================== 00:20:23.835 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:23.835 21:16:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 138226' 00:20:23.835 21:16:46 -- common/autotest_common.sh@945 -- # kill 138226 00:20:23.835 21:16:46 -- common/autotest_common.sh@950 -- # wait 138226 00:20:23.835 [2024-06-07 21:16:46.493535] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:24.093 [2024-06-07 21:16:46.550590] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:24.351 00:20:24.351 real 0m22.268s 00:20:24.351 user 0m30.771s 00:20:24.351 sys 0m3.714s 00:20:24.351 21:16:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:24.351 ************************************ 00:20:24.351 END TEST raid_rebuild_test 00:20:24.351 ************************************ 00:20:24.351 21:16:46 -- common/autotest_common.sh@10 -- # set +x 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:20:24.351 21:16:46 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:24.351 21:16:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:24.351 21:16:46 -- common/autotest_common.sh@10 -- # set +x 00:20:24.351 ************************************ 00:20:24.351 START TEST raid_rebuild_test_sb 00:20:24.351 ************************************ 00:20:24.351 21:16:46 -- 
common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true false 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:24.351 21:16:46 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:24.352 21:16:46 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:24.352 21:16:46 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:24.352 21:16:46 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:24.352 21:16:46 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:24.352 21:16:46 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:24.352 21:16:46 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:24.352 21:16:46 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:24.352 21:16:46 -- bdev/bdev_raid.sh@544 -- # raid_pid=138809 00:20:24.352 21:16:46 -- bdev/bdev_raid.sh@545 -- # waitforlisten 138809 /var/tmp/spdk-raid.sock 00:20:24.352 21:16:46 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:24.352 21:16:46 -- common/autotest_common.sh@819 -- # '[' -z 138809 ']' 00:20:24.352 21:16:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:24.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:24.352 21:16:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:24.352 21:16:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:24.352 21:16:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:24.352 21:16:46 -- common/autotest_common.sh@10 -- # set +x 00:20:24.352 [2024-06-07 21:16:46.980028] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
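bdevperf is launched with -z (start idle and wait for RPC-driven work) on a private socket, and waitforlisten blocks until that socket answers. One way to implement the wait is to poll a cheap RPC: rpc_get_methods is a standard SPDK RPC that succeeds as soon as the server is up. A sketch under those assumptions ($rootdir standing in for the SPDK tree, 10 s budget chosen here for illustration):

    # Poll the app's RPC socket until it responds, or bail if the app dies.
    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk-raid.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during init
            "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }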
00:20:24.352 [2024-06-07 21:16:46.980260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138809 ] 00:20:24.352 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:24.352 Zero copy mechanism will not be used. 00:20:24.610 [2024-06-07 21:16:47.131024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.610 [2024-06-07 21:16:47.222824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.868 [2024-06-07 21:16:47.294339] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:25.434 21:16:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:25.434 21:16:47 -- common/autotest_common.sh@852 -- # return 0 00:20:25.434 21:16:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:25.434 21:16:47 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:25.434 21:16:47 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:25.692 BaseBdev1_malloc 00:20:25.692 21:16:48 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:25.950 [2024-06-07 21:16:48.394844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:25.950 [2024-06-07 21:16:48.395036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.950 [2024-06-07 21:16:48.395086] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:20:25.950 [2024-06-07 21:16:48.395143] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.950 [2024-06-07 21:16:48.398492] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.950 [2024-06-07 21:16:48.398565] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:25.950 BaseBdev1 00:20:25.950 21:16:48 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:25.950 21:16:48 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:25.950 21:16:48 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:26.209 BaseBdev2_malloc 00:20:26.209 21:16:48 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:26.209 [2024-06-07 21:16:48.827224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:26.209 [2024-06-07 21:16:48.827413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.209 [2024-06-07 21:16:48.827474] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:20:26.209 [2024-06-07 21:16:48.827544] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.209 [2024-06-07 21:16:48.830452] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.209 [2024-06-07 21:16:48.830524] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:26.209 BaseBdev2 00:20:26.209 21:16:48 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 
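Each base bdev is a two-layer stack: a 32 MiB RAM-backed malloc bdev with 512 B blocks, wrapped in a passthru bdev so the top device can later be deleted and re-created without disturbing the data underneath. The same pattern repeats for BaseBdev2 through BaseBdev4 below, and the spare adds a delay bdev in the middle, presumably so the later rebuild is slow enough to be observed mid-flight. A sketch of the RPC sequence, with $rpc standing in for the rpc.py invocation used throughout this log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # One base disk: malloc backing store + passthru on top.
    $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc       # 32 MiB, 512 B blocks
    $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    # The spare inserts a delay bdev between malloc and passthru.
    $rpc bdev_malloc_create 32 512 -b spare_malloc
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare

The delay units are microseconds (-w average write latency, -n 99th-percentile write latency), so 100000 means 100 ms per write on the spare.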
00:20:26.209 21:16:48 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:26.209 21:16:48 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:26.469 BaseBdev3_malloc 00:20:26.469 21:16:49 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:26.728 [2024-06-07 21:16:49.290780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:26.728 [2024-06-07 21:16:49.290894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.728 [2024-06-07 21:16:49.290942] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:26.728 [2024-06-07 21:16:49.291012] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.728 [2024-06-07 21:16:49.293550] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.728 [2024-06-07 21:16:49.293629] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:26.728 BaseBdev3 00:20:26.728 21:16:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:26.728 21:16:49 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:26.728 21:16:49 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:26.986 BaseBdev4_malloc 00:20:26.986 21:16:49 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:27.245 [2024-06-07 21:16:49.701204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:27.245 [2024-06-07 21:16:49.701336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.245 [2024-06-07 21:16:49.701394] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:27.245 [2024-06-07 21:16:49.701446] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.245 [2024-06-07 21:16:49.704064] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.245 [2024-06-07 21:16:49.704142] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:27.245 BaseBdev4 00:20:27.245 21:16:49 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:27.245 spare_malloc 00:20:27.504 21:16:49 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:27.504 spare_delay 00:20:27.504 21:16:50 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:27.763 [2024-06-07 21:16:50.323587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:27.763 [2024-06-07 21:16:50.323783] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.763 [2024-06-07 21:16:50.323841] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:27.763 [2024-06-07 21:16:50.323914] vbdev_passthru.c: 691:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:20:27.763 [2024-06-07 21:16:50.327056] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.763 [2024-06-07 21:16:50.327143] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:27.763 spare 00:20:27.763 21:16:50 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:28.022 [2024-06-07 21:16:50.519809] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:28.022 [2024-06-07 21:16:50.522247] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:28.022 [2024-06-07 21:16:50.522372] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:28.022 [2024-06-07 21:16:50.522437] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:28.022 [2024-06-07 21:16:50.522771] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:20:28.022 [2024-06-07 21:16:50.522800] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:28.022 [2024-06-07 21:16:50.523022] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:20:28.022 [2024-06-07 21:16:50.523633] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:20:28.022 [2024-06-07 21:16:50.523662] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:20:28.022 [2024-06-07 21:16:50.523931] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.022 21:16:50 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:28.022 21:16:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:28.022 21:16:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:28.022 21:16:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:28.022 21:16:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:28.022 21:16:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:28.022 21:16:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:28.022 21:16:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:28.022 21:16:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:28.022 21:16:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:28.022 21:16:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.022 21:16:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.280 21:16:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:28.280 "name": "raid_bdev1", 00:20:28.280 "uuid": "440cd4cb-4c58-415c-b6ec-9e4298dc10f3", 00:20:28.280 "strip_size_kb": 0, 00:20:28.280 "state": "online", 00:20:28.280 "raid_level": "raid1", 00:20:28.280 "superblock": true, 00:20:28.280 "num_base_bdevs": 4, 00:20:28.280 "num_base_bdevs_discovered": 4, 00:20:28.280 "num_base_bdevs_operational": 4, 00:20:28.280 "base_bdevs_list": [ 00:20:28.280 { 00:20:28.280 "name": "BaseBdev1", 00:20:28.280 "uuid": "7d444276-003e-5a99-be54-40a2d901c671", 00:20:28.280 "is_configured": true, 00:20:28.280 "data_offset": 2048, 00:20:28.280 "data_size": 63488 00:20:28.280 }, 00:20:28.280 { 00:20:28.280 "name": "BaseBdev2", 
00:20:28.280 "uuid": "b02fbc8e-099a-5699-b95c-2d9b3e9b926f", 00:20:28.280 "is_configured": true, 00:20:28.280 "data_offset": 2048, 00:20:28.280 "data_size": 63488 00:20:28.280 }, 00:20:28.280 { 00:20:28.280 "name": "BaseBdev3", 00:20:28.280 "uuid": "1300d842-f431-5fcd-8bf7-e42ad90e363a", 00:20:28.280 "is_configured": true, 00:20:28.280 "data_offset": 2048, 00:20:28.280 "data_size": 63488 00:20:28.280 }, 00:20:28.280 { 00:20:28.280 "name": "BaseBdev4", 00:20:28.280 "uuid": "b6343ff9-521c-5dce-b2e7-6e483715c111", 00:20:28.280 "is_configured": true, 00:20:28.280 "data_offset": 2048, 00:20:28.280 "data_size": 63488 00:20:28.280 } 00:20:28.280 ] 00:20:28.280 }' 00:20:28.280 21:16:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:28.280 21:16:50 -- common/autotest_common.sh@10 -- # set +x 00:20:28.847 21:16:51 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:28.847 21:16:51 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:29.105 [2024-06-07 21:16:51.708416] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:29.105 21:16:51 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:29.105 21:16:51 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.105 21:16:51 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:29.364 21:16:51 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:29.364 21:16:51 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:29.364 21:16:51 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:29.364 21:16:51 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:29.364 21:16:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:29.364 21:16:51 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:29.364 21:16:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:29.364 21:16:51 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:29.364 21:16:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:29.364 21:16:51 -- bdev/nbd_common.sh@12 -- # local i 00:20:29.364 21:16:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:29.364 21:16:51 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:29.364 21:16:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:29.622 [2024-06-07 21:16:52.168367] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:29.622 /dev/nbd0 00:20:29.622 21:16:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:29.622 21:16:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:29.622 21:16:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:29.622 21:16:52 -- common/autotest_common.sh@857 -- # local i 00:20:29.622 21:16:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:29.622 21:16:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:29.622 21:16:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:29.622 21:16:52 -- common/autotest_common.sh@861 -- # break 00:20:29.622 21:16:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:29.622 21:16:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:29.622 21:16:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:29.622 1+0 
records in 00:20:29.622 1+0 records out 00:20:29.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288892 s, 14.2 MB/s 00:20:29.622 21:16:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:29.622 21:16:52 -- common/autotest_common.sh@874 -- # size=4096 00:20:29.622 21:16:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:29.622 21:16:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:29.622 21:16:52 -- common/autotest_common.sh@877 -- # return 0 00:20:29.622 21:16:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:29.622 21:16:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:29.622 21:16:52 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:29.622 21:16:52 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:29.622 21:16:52 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:37.745 63488+0 records in 00:20:37.745 63488+0 records out 00:20:37.745 32505856 bytes (33 MB, 31 MiB) copied, 7.14843 s, 4.5 MB/s 00:20:37.745 21:16:59 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:37.745 21:16:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:37.745 21:16:59 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:37.745 21:16:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:37.745 21:16:59 -- bdev/nbd_common.sh@51 -- # local i 00:20:37.745 21:16:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:37.745 21:16:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:37.745 21:16:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:37.745 21:16:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:37.745 21:16:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:37.745 21:16:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:37.745 21:16:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:37.745 21:16:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:37.745 21:16:59 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:37.745 [2024-06-07 21:16:59.592639] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.745 21:16:59 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:37.745 21:16:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:37.745 21:16:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:37.745 21:16:59 -- bdev/nbd_common.sh@41 -- # break 00:20:37.745 21:16:59 -- bdev/nbd_common.sh@45 -- # return 0 00:20:37.745 21:16:59 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:37.745 [2024-06-07 21:16:59.920435] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:37.745 21:16:59 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:37.745 21:16:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:37.745 21:16:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:37.745 21:16:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:37.745 21:16:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:37.745 21:16:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:37.745 21:16:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:37.745 21:16:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
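The numbers here line up with the geometry reported earlier: each 32 MiB base bdev holds 65536 blocks of 512 B, of which data_offset reserves 2048 and data_size exposes 63488, and the dd above writes exactly that payload (63488 x 512 B = 32,505,856 B, about 31 MiB). Removing BaseBdev1 straight afterwards degrades the raid1 from 4 discovered base bdevs to 3 while the array stays online. The two steps, with $rpc as sketched earlier:

    # Fill the array's entire usable capacity through nbd, then degrade it.
    dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct   # 63488*512 = 32505856 B
    $rpc bdev_raid_remove_base_bdev BaseBdev1    # raid1 stays online with 3 of 4 members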
00:20:37.745 21:16:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:37.745 21:16:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:37.745 21:16:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.745 21:16:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.745 21:17:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:37.745 "name": "raid_bdev1", 00:20:37.745 "uuid": "440cd4cb-4c58-415c-b6ec-9e4298dc10f3", 00:20:37.745 "strip_size_kb": 0, 00:20:37.745 "state": "online", 00:20:37.745 "raid_level": "raid1", 00:20:37.745 "superblock": true, 00:20:37.745 "num_base_bdevs": 4, 00:20:37.745 "num_base_bdevs_discovered": 3, 00:20:37.745 "num_base_bdevs_operational": 3, 00:20:37.745 "base_bdevs_list": [ 00:20:37.745 { 00:20:37.745 "name": null, 00:20:37.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.745 "is_configured": false, 00:20:37.745 "data_offset": 2048, 00:20:37.745 "data_size": 63488 00:20:37.745 }, 00:20:37.745 { 00:20:37.745 "name": "BaseBdev2", 00:20:37.745 "uuid": "b02fbc8e-099a-5699-b95c-2d9b3e9b926f", 00:20:37.745 "is_configured": true, 00:20:37.745 "data_offset": 2048, 00:20:37.745 "data_size": 63488 00:20:37.745 }, 00:20:37.745 { 00:20:37.745 "name": "BaseBdev3", 00:20:37.745 "uuid": "1300d842-f431-5fcd-8bf7-e42ad90e363a", 00:20:37.745 "is_configured": true, 00:20:37.745 "data_offset": 2048, 00:20:37.745 "data_size": 63488 00:20:37.745 }, 00:20:37.745 { 00:20:37.745 "name": "BaseBdev4", 00:20:37.745 "uuid": "b6343ff9-521c-5dce-b2e7-6e483715c111", 00:20:37.745 "is_configured": true, 00:20:37.745 "data_offset": 2048, 00:20:37.745 "data_size": 63488 00:20:37.745 } 00:20:37.745 ] 00:20:37.745 }' 00:20:37.745 21:17:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:37.745 21:17:00 -- common/autotest_common.sh@10 -- # set +x 00:20:38.312 21:17:00 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:38.576 [2024-06-07 21:17:01.100693] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:38.576 [2024-06-07 21:17:01.100790] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:38.576 [2024-06-07 21:17:01.105235] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca5170 00:20:38.576 [2024-06-07 21:17:01.107368] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:38.576 21:17:01 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:39.513 21:17:02 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:39.513 21:17:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:39.513 21:17:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:39.513 21:17:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:39.513 21:17:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:39.513 21:17:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.513 21:17:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.773 21:17:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:39.773 "name": "raid_bdev1", 00:20:39.773 "uuid": "440cd4cb-4c58-415c-b6ec-9e4298dc10f3", 00:20:39.773 "strip_size_kb": 0, 00:20:39.773 "state": "online", 00:20:39.773 
"raid_level": "raid1", 00:20:39.773 "superblock": true, 00:20:39.773 "num_base_bdevs": 4, 00:20:39.773 "num_base_bdevs_discovered": 4, 00:20:39.773 "num_base_bdevs_operational": 4, 00:20:39.773 "process": { 00:20:39.773 "type": "rebuild", 00:20:39.773 "target": "spare", 00:20:39.773 "progress": { 00:20:39.773 "blocks": 24576, 00:20:39.773 "percent": 38 00:20:39.773 } 00:20:39.773 }, 00:20:39.773 "base_bdevs_list": [ 00:20:39.773 { 00:20:39.773 "name": "spare", 00:20:39.773 "uuid": "a01129fa-d230-57c7-9ac4-e82e26225e7e", 00:20:39.773 "is_configured": true, 00:20:39.773 "data_offset": 2048, 00:20:39.773 "data_size": 63488 00:20:39.773 }, 00:20:39.773 { 00:20:39.773 "name": "BaseBdev2", 00:20:39.773 "uuid": "b02fbc8e-099a-5699-b95c-2d9b3e9b926f", 00:20:39.773 "is_configured": true, 00:20:39.773 "data_offset": 2048, 00:20:39.773 "data_size": 63488 00:20:39.773 }, 00:20:39.773 { 00:20:39.773 "name": "BaseBdev3", 00:20:39.773 "uuid": "1300d842-f431-5fcd-8bf7-e42ad90e363a", 00:20:39.773 "is_configured": true, 00:20:39.773 "data_offset": 2048, 00:20:39.773 "data_size": 63488 00:20:39.773 }, 00:20:39.773 { 00:20:39.773 "name": "BaseBdev4", 00:20:39.773 "uuid": "b6343ff9-521c-5dce-b2e7-6e483715c111", 00:20:39.773 "is_configured": true, 00:20:39.773 "data_offset": 2048, 00:20:39.773 "data_size": 63488 00:20:39.773 } 00:20:39.773 ] 00:20:39.773 }' 00:20:39.773 21:17:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:39.773 21:17:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:39.773 21:17:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:40.031 21:17:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:40.031 21:17:02 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:40.289 [2024-06-07 21:17:02.706739] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:40.289 [2024-06-07 21:17:02.717624] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:40.289 [2024-06-07 21:17:02.717732] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:40.289 21:17:02 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:40.289 21:17:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:40.289 21:17:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:40.289 21:17:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:40.289 21:17:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:40.289 21:17:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:40.289 21:17:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:40.289 21:17:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:40.289 21:17:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:40.289 21:17:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:40.289 21:17:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.289 21:17:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.289 21:17:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:40.289 "name": "raid_bdev1", 00:20:40.289 "uuid": "440cd4cb-4c58-415c-b6ec-9e4298dc10f3", 00:20:40.289 "strip_size_kb": 0, 00:20:40.289 "state": "online", 00:20:40.289 "raid_level": "raid1", 00:20:40.289 "superblock": true, 
00:20:40.289 "num_base_bdevs": 4, 00:20:40.289 "num_base_bdevs_discovered": 3, 00:20:40.289 "num_base_bdevs_operational": 3, 00:20:40.289 "base_bdevs_list": [ 00:20:40.289 { 00:20:40.289 "name": null, 00:20:40.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.289 "is_configured": false, 00:20:40.289 "data_offset": 2048, 00:20:40.289 "data_size": 63488 00:20:40.289 }, 00:20:40.289 { 00:20:40.289 "name": "BaseBdev2", 00:20:40.289 "uuid": "b02fbc8e-099a-5699-b95c-2d9b3e9b926f", 00:20:40.289 "is_configured": true, 00:20:40.290 "data_offset": 2048, 00:20:40.290 "data_size": 63488 00:20:40.290 }, 00:20:40.290 { 00:20:40.290 "name": "BaseBdev3", 00:20:40.290 "uuid": "1300d842-f431-5fcd-8bf7-e42ad90e363a", 00:20:40.290 "is_configured": true, 00:20:40.290 "data_offset": 2048, 00:20:40.290 "data_size": 63488 00:20:40.290 }, 00:20:40.290 { 00:20:40.290 "name": "BaseBdev4", 00:20:40.290 "uuid": "b6343ff9-521c-5dce-b2e7-6e483715c111", 00:20:40.290 "is_configured": true, 00:20:40.290 "data_offset": 2048, 00:20:40.290 "data_size": 63488 00:20:40.290 } 00:20:40.290 ] 00:20:40.290 }' 00:20:40.290 21:17:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:40.290 21:17:02 -- common/autotest_common.sh@10 -- # set +x 00:20:41.225 21:17:03 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:41.226 21:17:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:41.226 21:17:03 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:41.226 21:17:03 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:41.226 21:17:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:41.226 21:17:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.226 21:17:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.226 21:17:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:41.226 "name": "raid_bdev1", 00:20:41.226 "uuid": "440cd4cb-4c58-415c-b6ec-9e4298dc10f3", 00:20:41.226 "strip_size_kb": 0, 00:20:41.226 "state": "online", 00:20:41.226 "raid_level": "raid1", 00:20:41.226 "superblock": true, 00:20:41.226 "num_base_bdevs": 4, 00:20:41.226 "num_base_bdevs_discovered": 3, 00:20:41.226 "num_base_bdevs_operational": 3, 00:20:41.226 "base_bdevs_list": [ 00:20:41.226 { 00:20:41.226 "name": null, 00:20:41.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.226 "is_configured": false, 00:20:41.226 "data_offset": 2048, 00:20:41.226 "data_size": 63488 00:20:41.226 }, 00:20:41.226 { 00:20:41.226 "name": "BaseBdev2", 00:20:41.226 "uuid": "b02fbc8e-099a-5699-b95c-2d9b3e9b926f", 00:20:41.226 "is_configured": true, 00:20:41.226 "data_offset": 2048, 00:20:41.226 "data_size": 63488 00:20:41.226 }, 00:20:41.226 { 00:20:41.226 "name": "BaseBdev3", 00:20:41.226 "uuid": "1300d842-f431-5fcd-8bf7-e42ad90e363a", 00:20:41.226 "is_configured": true, 00:20:41.226 "data_offset": 2048, 00:20:41.226 "data_size": 63488 00:20:41.226 }, 00:20:41.226 { 00:20:41.226 "name": "BaseBdev4", 00:20:41.226 "uuid": "b6343ff9-521c-5dce-b2e7-6e483715c111", 00:20:41.226 "is_configured": true, 00:20:41.226 "data_offset": 2048, 00:20:41.226 "data_size": 63488 00:20:41.226 } 00:20:41.226 ] 00:20:41.226 }' 00:20:41.226 21:17:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:41.226 21:17:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:41.226 21:17:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:41.484 21:17:03 -- 
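While a rebuild is running, bdev_raid_get_bdevs reports a process object with type "rebuild", target "spare" and a progress block; once it finishes the object disappears, which is why the verify helpers default missing fields with jq's // "none". A small polling loop over the same jq filters the trace uses, reusing $rpc from the earlier sketch:

    # Poll rebuild progress until the process object vanishes.
    while :; do
        info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
        echo "rebuilt: $(jq -r '.process.progress.percent' <<< "$info")%"
        sleep 1
    done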
bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:41.484 21:17:03 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:41.743 [2024-06-07 21:17:04.182836] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:41.743 [2024-06-07 21:17:04.182900] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:41.743 [2024-06-07 21:17:04.187032] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca5310 00:20:41.743 [2024-06-07 21:17:04.189160] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:41.743 21:17:04 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:42.679 21:17:05 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:42.679 21:17:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:42.679 21:17:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:42.679 21:17:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:42.679 21:17:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:42.679 21:17:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.679 21:17:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.937 21:17:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:42.937 "name": "raid_bdev1", 00:20:42.937 "uuid": "440cd4cb-4c58-415c-b6ec-9e4298dc10f3", 00:20:42.937 "strip_size_kb": 0, 00:20:42.937 "state": "online", 00:20:42.937 "raid_level": "raid1", 00:20:42.937 "superblock": true, 00:20:42.937 "num_base_bdevs": 4, 00:20:42.937 "num_base_bdevs_discovered": 4, 00:20:42.937 "num_base_bdevs_operational": 4, 00:20:42.937 "process": { 00:20:42.937 "type": "rebuild", 00:20:42.937 "target": "spare", 00:20:42.937 "progress": { 00:20:42.937 "blocks": 24576, 00:20:42.937 "percent": 38 00:20:42.937 } 00:20:42.937 }, 00:20:42.937 "base_bdevs_list": [ 00:20:42.937 { 00:20:42.937 "name": "spare", 00:20:42.937 "uuid": "a01129fa-d230-57c7-9ac4-e82e26225e7e", 00:20:42.937 "is_configured": true, 00:20:42.937 "data_offset": 2048, 00:20:42.937 "data_size": 63488 00:20:42.937 }, 00:20:42.937 { 00:20:42.937 "name": "BaseBdev2", 00:20:42.937 "uuid": "b02fbc8e-099a-5699-b95c-2d9b3e9b926f", 00:20:42.937 "is_configured": true, 00:20:42.937 "data_offset": 2048, 00:20:42.937 "data_size": 63488 00:20:42.937 }, 00:20:42.937 { 00:20:42.937 "name": "BaseBdev3", 00:20:42.937 "uuid": "1300d842-f431-5fcd-8bf7-e42ad90e363a", 00:20:42.937 "is_configured": true, 00:20:42.937 "data_offset": 2048, 00:20:42.937 "data_size": 63488 00:20:42.937 }, 00:20:42.937 { 00:20:42.937 "name": "BaseBdev4", 00:20:42.937 "uuid": "b6343ff9-521c-5dce-b2e7-6e483715c111", 00:20:42.937 "is_configured": true, 00:20:42.937 "data_offset": 2048, 00:20:42.937 "data_size": 63488 00:20:42.937 } 00:20:42.937 ] 00:20:42.937 }' 00:20:42.937 21:17:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:42.937 21:17:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:42.937 21:17:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:42.937 21:17:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:42.937 21:17:05 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:42.937 21:17:05 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:42.937 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:42.937 21:17:05 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:20:42.937 21:17:05 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:42.937 21:17:05 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:20:42.937 21:17:05 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:43.196 [2024-06-07 21:17:05.796495] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:43.196 [2024-06-07 21:17:05.797988] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca5310 00:20:43.455 21:17:05 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:20:43.455 21:17:05 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:20:43.455 21:17:05 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:43.455 21:17:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:43.455 21:17:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:43.455 21:17:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:43.455 21:17:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:43.455 21:17:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.455 21:17:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.715 21:17:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:43.715 "name": "raid_bdev1", 00:20:43.715 "uuid": "440cd4cb-4c58-415c-b6ec-9e4298dc10f3", 00:20:43.715 "strip_size_kb": 0, 00:20:43.715 "state": "online", 00:20:43.715 "raid_level": "raid1", 00:20:43.715 "superblock": true, 00:20:43.715 "num_base_bdevs": 4, 00:20:43.715 "num_base_bdevs_discovered": 3, 00:20:43.715 "num_base_bdevs_operational": 3, 00:20:43.715 "process": { 00:20:43.715 "type": "rebuild", 00:20:43.715 "target": "spare", 00:20:43.715 "progress": { 00:20:43.715 "blocks": 38912, 00:20:43.715 "percent": 61 00:20:43.715 } 00:20:43.715 }, 00:20:43.715 "base_bdevs_list": [ 00:20:43.715 { 00:20:43.715 "name": "spare", 00:20:43.715 "uuid": "a01129fa-d230-57c7-9ac4-e82e26225e7e", 00:20:43.715 "is_configured": true, 00:20:43.715 "data_offset": 2048, 00:20:43.715 "data_size": 63488 00:20:43.715 }, 00:20:43.715 { 00:20:43.715 "name": null, 00:20:43.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.715 "is_configured": false, 00:20:43.715 "data_offset": 2048, 00:20:43.715 "data_size": 63488 00:20:43.715 }, 00:20:43.715 { 00:20:43.715 "name": "BaseBdev3", 00:20:43.715 "uuid": "1300d842-f431-5fcd-8bf7-e42ad90e363a", 00:20:43.715 "is_configured": true, 00:20:43.715 "data_offset": 2048, 00:20:43.715 "data_size": 63488 00:20:43.715 }, 00:20:43.715 { 00:20:43.715 "name": "BaseBdev4", 00:20:43.715 "uuid": "b6343ff9-521c-5dce-b2e7-6e483715c111", 00:20:43.715 "is_configured": true, 00:20:43.715 "data_offset": 2048, 00:20:43.715 "data_size": 63488 00:20:43.715 } 00:20:43.715 ] 00:20:43.715 }' 00:20:43.715 21:17:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:43.715 21:17:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:43.715 21:17:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:43.715 21:17:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:43.715 21:17:06 -- bdev/bdev_raid.sh@657 -- # local timeout=486 00:20:43.715 21:17:06 -- 
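The "line 617: [: =: unary operator expected" message just above is a captured script bug, not a test failure: the variable on the left of '[' ... = false ']' expanded to nothing while unquoted, so [ received only "= false" and could not parse it, and the script falls through to the next line. Quoting the expansion, or using [[ ]], avoids the error entirely. A minimal reproduction (the variable name here is hypothetical):

    flag=""                   # empty/unset feature flag
    [ $flag = false ]         # bash: [: =: unary operator expected
    [ "$flag" = false ]       # fine: quoted empty string, test is simply false
    [[ $flag == false ]]      # fine: [[ ]] does not word-split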
bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:43.715 21:17:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:43.715 21:17:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:43.715 21:17:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:43.715 21:17:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:43.715 21:17:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:43.715 21:17:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.715 21:17:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.973 21:17:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:43.973 "name": "raid_bdev1", 00:20:43.973 "uuid": "440cd4cb-4c58-415c-b6ec-9e4298dc10f3", 00:20:43.973 "strip_size_kb": 0, 00:20:43.973 "state": "online", 00:20:43.973 "raid_level": "raid1", 00:20:43.973 "superblock": true, 00:20:43.973 "num_base_bdevs": 4, 00:20:43.973 "num_base_bdevs_discovered": 3, 00:20:43.973 "num_base_bdevs_operational": 3, 00:20:43.973 "process": { 00:20:43.974 "type": "rebuild", 00:20:43.974 "target": "spare", 00:20:43.974 "progress": { 00:20:43.974 "blocks": 47104, 00:20:43.974 "percent": 74 00:20:43.974 } 00:20:43.974 }, 00:20:43.974 "base_bdevs_list": [ 00:20:43.974 { 00:20:43.974 "name": "spare", 00:20:43.974 "uuid": "a01129fa-d230-57c7-9ac4-e82e26225e7e", 00:20:43.974 "is_configured": true, 00:20:43.974 "data_offset": 2048, 00:20:43.974 "data_size": 63488 00:20:43.974 }, 00:20:43.974 { 00:20:43.974 "name": null, 00:20:43.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.974 "is_configured": false, 00:20:43.974 "data_offset": 2048, 00:20:43.974 "data_size": 63488 00:20:43.974 }, 00:20:43.974 { 00:20:43.974 "name": "BaseBdev3", 00:20:43.974 "uuid": "1300d842-f431-5fcd-8bf7-e42ad90e363a", 00:20:43.974 "is_configured": true, 00:20:43.974 "data_offset": 2048, 00:20:43.974 "data_size": 63488 00:20:43.974 }, 00:20:43.974 { 00:20:43.974 "name": "BaseBdev4", 00:20:43.974 "uuid": "b6343ff9-521c-5dce-b2e7-6e483715c111", 00:20:43.974 "is_configured": true, 00:20:43.974 "data_offset": 2048, 00:20:43.974 "data_size": 63488 00:20:43.974 } 00:20:43.974 ] 00:20:43.974 }' 00:20:43.974 21:17:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:43.974 21:17:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:43.974 21:17:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:43.974 21:17:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:43.974 21:17:06 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:44.908 [2024-06-07 21:17:07.307003] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:44.908 [2024-06-07 21:17:07.307096] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:44.908 [2024-06-07 21:17:07.307310] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.166 21:17:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:45.166 21:17:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:45.166 21:17:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:45.166 21:17:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:45.166 21:17:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:45.166 21:17:07 -- bdev/bdev_raid.sh@186 -- # local 
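The timeout=486 / (( SECONDS < timeout )) pair is bash's built-in stopwatch: SECONDS counts seconds since the shell started, so the loop keeps re-checking the rebuild until the deadline passes (486 is presumably the shell's current SECONDS plus a fixed budget). The idiom, with an illustrative 60 s budget and $rpc as before:

    # Re-poll until the rebuild completes or the deadline passes.
    timeout=$((SECONDS + 60))
    while ((SECONDS < timeout)); do
        type=$($rpc bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
        [[ $type == none ]] && break   # process object gone: rebuild finished
        sleep 1
    done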
raid_bdev_info 00:20:45.166 21:17:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.166 21:17:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.425 21:17:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:45.425 "name": "raid_bdev1", 00:20:45.425 "uuid": "440cd4cb-4c58-415c-b6ec-9e4298dc10f3", 00:20:45.425 "strip_size_kb": 0, 00:20:45.425 "state": "online", 00:20:45.425 "raid_level": "raid1", 00:20:45.425 "superblock": true, 00:20:45.425 "num_base_bdevs": 4, 00:20:45.425 "num_base_bdevs_discovered": 3, 00:20:45.425 "num_base_bdevs_operational": 3, 00:20:45.425 "base_bdevs_list": [ 00:20:45.425 { 00:20:45.425 "name": "spare", 00:20:45.425 "uuid": "a01129fa-d230-57c7-9ac4-e82e26225e7e", 00:20:45.425 "is_configured": true, 00:20:45.425 "data_offset": 2048, 00:20:45.425 "data_size": 63488 00:20:45.425 }, 00:20:45.425 { 00:20:45.425 "name": null, 00:20:45.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.425 "is_configured": false, 00:20:45.425 "data_offset": 2048, 00:20:45.425 "data_size": 63488 00:20:45.425 }, 00:20:45.425 { 00:20:45.425 "name": "BaseBdev3", 00:20:45.425 "uuid": "1300d842-f431-5fcd-8bf7-e42ad90e363a", 00:20:45.425 "is_configured": true, 00:20:45.425 "data_offset": 2048, 00:20:45.425 "data_size": 63488 00:20:45.425 }, 00:20:45.425 { 00:20:45.425 "name": "BaseBdev4", 00:20:45.425 "uuid": "b6343ff9-521c-5dce-b2e7-6e483715c111", 00:20:45.425 "is_configured": true, 00:20:45.425 "data_offset": 2048, 00:20:45.425 "data_size": 63488 00:20:45.425 } 00:20:45.425 ] 00:20:45.425 }' 00:20:45.425 21:17:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:45.425 21:17:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:45.425 21:17:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:45.425 21:17:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:45.425 21:17:08 -- bdev/bdev_raid.sh@660 -- # break 00:20:45.425 21:17:08 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:45.425 21:17:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:45.425 21:17:08 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:45.425 21:17:08 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:45.425 21:17:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:45.425 21:17:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.425 21:17:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.684 21:17:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:45.684 "name": "raid_bdev1", 00:20:45.684 "uuid": "440cd4cb-4c58-415c-b6ec-9e4298dc10f3", 00:20:45.684 "strip_size_kb": 0, 00:20:45.684 "state": "online", 00:20:45.684 "raid_level": "raid1", 00:20:45.684 "superblock": true, 00:20:45.684 "num_base_bdevs": 4, 00:20:45.684 "num_base_bdevs_discovered": 3, 00:20:45.684 "num_base_bdevs_operational": 3, 00:20:45.684 "base_bdevs_list": [ 00:20:45.684 { 00:20:45.684 "name": "spare", 00:20:45.684 "uuid": "a01129fa-d230-57c7-9ac4-e82e26225e7e", 00:20:45.684 "is_configured": true, 00:20:45.684 "data_offset": 2048, 00:20:45.684 "data_size": 63488 00:20:45.684 }, 00:20:45.684 { 00:20:45.684 "name": null, 00:20:45.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.684 "is_configured": false, 00:20:45.684 "data_offset": 2048, 00:20:45.684 
"data_size": 63488 00:20:45.684 }, 00:20:45.684 { 00:20:45.684 "name": "BaseBdev3", 00:20:45.684 "uuid": "1300d842-f431-5fcd-8bf7-e42ad90e363a", 00:20:45.684 "is_configured": true, 00:20:45.684 "data_offset": 2048, 00:20:45.684 "data_size": 63488 00:20:45.684 }, 00:20:45.684 { 00:20:45.684 "name": "BaseBdev4", 00:20:45.684 "uuid": "b6343ff9-521c-5dce-b2e7-6e483715c111", 00:20:45.684 "is_configured": true, 00:20:45.684 "data_offset": 2048, 00:20:45.684 "data_size": 63488 00:20:45.684 } 00:20:45.684 ] 00:20:45.684 }' 00:20:45.684 21:17:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:45.684 21:17:08 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:45.943 21:17:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:45.943 21:17:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:45.943 21:17:08 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:45.943 21:17:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:45.943 21:17:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:45.943 21:17:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:45.943 21:17:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:45.943 21:17:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:45.943 21:17:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:45.943 21:17:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:45.943 21:17:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:45.943 21:17:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:45.943 21:17:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.943 21:17:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.202 21:17:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:46.202 "name": "raid_bdev1", 00:20:46.202 "uuid": "440cd4cb-4c58-415c-b6ec-9e4298dc10f3", 00:20:46.202 "strip_size_kb": 0, 00:20:46.202 "state": "online", 00:20:46.202 "raid_level": "raid1", 00:20:46.202 "superblock": true, 00:20:46.202 "num_base_bdevs": 4, 00:20:46.202 "num_base_bdevs_discovered": 3, 00:20:46.202 "num_base_bdevs_operational": 3, 00:20:46.202 "base_bdevs_list": [ 00:20:46.202 { 00:20:46.202 "name": "spare", 00:20:46.202 "uuid": "a01129fa-d230-57c7-9ac4-e82e26225e7e", 00:20:46.202 "is_configured": true, 00:20:46.202 "data_offset": 2048, 00:20:46.202 "data_size": 63488 00:20:46.202 }, 00:20:46.202 { 00:20:46.202 "name": null, 00:20:46.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.202 "is_configured": false, 00:20:46.202 "data_offset": 2048, 00:20:46.202 "data_size": 63488 00:20:46.202 }, 00:20:46.202 { 00:20:46.202 "name": "BaseBdev3", 00:20:46.202 "uuid": "1300d842-f431-5fcd-8bf7-e42ad90e363a", 00:20:46.202 "is_configured": true, 00:20:46.202 "data_offset": 2048, 00:20:46.202 "data_size": 63488 00:20:46.202 }, 00:20:46.202 { 00:20:46.202 "name": "BaseBdev4", 00:20:46.202 "uuid": "b6343ff9-521c-5dce-b2e7-6e483715c111", 00:20:46.202 "is_configured": true, 00:20:46.202 "data_offset": 2048, 00:20:46.202 "data_size": 63488 00:20:46.202 } 00:20:46.202 ] 00:20:46.202 }' 00:20:46.202 21:17:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:46.202 21:17:08 -- common/autotest_common.sh@10 -- # set +x 00:20:46.769 21:17:09 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 
00:20:47.055 [2024-06-07 21:17:09.620552] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:47.055 [2024-06-07 21:17:09.620614] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:47.055 [2024-06-07 21:17:09.620780] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:47.055 [2024-06-07 21:17:09.620893] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:47.055 [2024-06-07 21:17:09.620915] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:20:47.055 21:17:09 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.055 21:17:09 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:47.313 21:17:09 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:47.313 21:17:09 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:47.313 21:17:09 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:47.313 21:17:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:47.313 21:17:09 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:47.313 21:17:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:47.313 21:17:09 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:47.313 21:17:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:47.313 21:17:09 -- bdev/nbd_common.sh@12 -- # local i 00:20:47.313 21:17:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:47.313 21:17:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:47.313 21:17:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:47.571 /dev/nbd0 00:20:47.571 21:17:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:47.571 21:17:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:47.571 21:17:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:47.571 21:17:10 -- common/autotest_common.sh@857 -- # local i 00:20:47.571 21:17:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:47.571 21:17:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:47.571 21:17:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:47.571 21:17:10 -- common/autotest_common.sh@861 -- # break 00:20:47.571 21:17:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:47.571 21:17:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:47.571 21:17:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:47.571 1+0 records in 00:20:47.571 1+0 records out 00:20:47.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546686 s, 7.5 MB/s 00:20:47.571 21:17:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.571 21:17:10 -- common/autotest_common.sh@874 -- # size=4096 00:20:47.571 21:17:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.571 21:17:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:47.571 21:17:10 -- common/autotest_common.sh@877 -- # return 0 00:20:47.571 21:17:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:47.571 21:17:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:47.571 21:17:10 -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:47.829 /dev/nbd1 00:20:47.829 21:17:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:47.829 21:17:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:47.829 21:17:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:47.829 21:17:10 -- common/autotest_common.sh@857 -- # local i 00:20:47.829 21:17:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:47.830 21:17:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:47.830 21:17:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:47.830 21:17:10 -- common/autotest_common.sh@861 -- # break 00:20:47.830 21:17:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:47.830 21:17:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:47.830 21:17:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:47.830 1+0 records in 00:20:47.830 1+0 records out 00:20:47.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642378 s, 6.4 MB/s 00:20:47.830 21:17:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.830 21:17:10 -- common/autotest_common.sh@874 -- # size=4096 00:20:47.830 21:17:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.830 21:17:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:47.830 21:17:10 -- common/autotest_common.sh@877 -- # return 0 00:20:47.830 21:17:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:47.830 21:17:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:47.830 21:17:10 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:47.830 21:17:10 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:47.830 21:17:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:47.830 21:17:10 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:47.830 21:17:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:47.830 21:17:10 -- bdev/nbd_common.sh@51 -- # local i 00:20:47.830 21:17:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:47.830 21:17:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:48.156 21:17:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:48.156 21:17:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:48.156 21:17:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:48.156 21:17:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:48.156 21:17:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:48.156 21:17:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:48.156 21:17:10 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:48.156 21:17:10 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:48.156 21:17:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:48.156 21:17:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:48.156 21:17:10 -- bdev/nbd_common.sh@41 -- # break 00:20:48.156 21:17:10 -- bdev/nbd_common.sh@45 -- # return 0 00:20:48.156 21:17:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:48.156 21:17:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:48.414 21:17:11 -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd1 00:20:48.414 21:17:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:48.414 21:17:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:48.414 21:17:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:48.414 21:17:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:48.414 21:17:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:48.414 21:17:11 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:48.672 21:17:11 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:48.672 21:17:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:48.672 21:17:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:48.672 21:17:11 -- bdev/nbd_common.sh@41 -- # break 00:20:48.672 21:17:11 -- bdev/nbd_common.sh@45 -- # return 0 00:20:48.672 21:17:11 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:20:48.672 21:17:11 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:48.672 21:17:11 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:20:48.672 21:17:11 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:48.672 21:17:11 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:48.930 [2024-06-07 21:17:11.582605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:48.930 [2024-06-07 21:17:11.582711] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.930 [2024-06-07 21:17:11.582756] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:20:48.930 [2024-06-07 21:17:11.582779] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.930 [2024-06-07 21:17:11.585242] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.930 [2024-06-07 21:17:11.585333] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:48.930 [2024-06-07 21:17:11.585458] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:48.930 [2024-06-07 21:17:11.585521] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:48.930 BaseBdev1 00:20:49.188 21:17:11 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:49.188 21:17:11 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:20:49.188 21:17:11 -- bdev/bdev_raid.sh@696 -- # continue 00:20:49.188 21:17:11 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:49.188 21:17:11 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:20:49.188 21:17:11 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:20:49.188 21:17:11 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:49.447 [2024-06-07 21:17:12.034680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:49.447 [2024-06-07 21:17:12.034820] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:49.447 [2024-06-07 21:17:12.034865] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:20:49.447 [2024-06-07 21:17:12.034886] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:49.447 
[2024-06-07 21:17:12.035384] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:49.447 [2024-06-07 21:17:12.035438] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:49.447 [2024-06-07 21:17:12.035533] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:20:49.447 [2024-06-07 21:17:12.035547] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:20:49.447 [2024-06-07 21:17:12.035554] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:49.447 [2024-06-07 21:17:12.035596] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:20:49.447 [2024-06-07 21:17:12.035652] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:49.447 BaseBdev3 00:20:49.447 21:17:12 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:49.447 21:17:12 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:20:49.447 21:17:12 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:20:49.705 21:17:12 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:49.964 [2024-06-07 21:17:12.442759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:49.964 [2024-06-07 21:17:12.442868] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:49.964 [2024-06-07 21:17:12.442924] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:20:49.964 [2024-06-07 21:17:12.442953] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:49.964 [2024-06-07 21:17:12.443429] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:49.964 [2024-06-07 21:17:12.443481] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:49.964 [2024-06-07 21:17:12.443562] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:20:49.964 [2024-06-07 21:17:12.443605] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:49.964 BaseBdev4 00:20:49.964 21:17:12 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:50.223 21:17:12 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:50.223 [2024-06-07 21:17:12.874833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:50.223 [2024-06-07 21:17:12.874932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.223 [2024-06-07 21:17:12.874970] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:20:50.223 [2024-06-07 21:17:12.875009] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.223 [2024-06-07 21:17:12.875514] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.223 [2024-06-07 21:17:12.875571] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:50.223 
[2024-06-07 21:17:12.875682] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:20:50.223 [2024-06-07 21:17:12.875725] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:50.223 spare 00:20:50.223 21:17:12 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:50.223 21:17:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:50.223 21:17:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:50.223 21:17:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:50.223 21:17:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:50.223 21:17:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:50.223 21:17:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:50.223 21:17:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:50.223 21:17:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:50.223 21:17:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:50.223 21:17:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.223 21:17:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.489 [2024-06-07 21:17:12.975856] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:20:50.489 [2024-06-07 21:17:12.975884] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:50.489 [2024-06-07 21:17:12.976058] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5f20 00:20:50.489 [2024-06-07 21:17:12.976557] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:20:50.489 [2024-06-07 21:17:12.976582] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:20:50.489 [2024-06-07 21:17:12.976730] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.489 21:17:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:50.489 "name": "raid_bdev1", 00:20:50.489 "uuid": "440cd4cb-4c58-415c-b6ec-9e4298dc10f3", 00:20:50.489 "strip_size_kb": 0, 00:20:50.489 "state": "online", 00:20:50.489 "raid_level": "raid1", 00:20:50.489 "superblock": true, 00:20:50.489 "num_base_bdevs": 4, 00:20:50.489 "num_base_bdevs_discovered": 3, 00:20:50.489 "num_base_bdevs_operational": 3, 00:20:50.489 "base_bdevs_list": [ 00:20:50.489 { 00:20:50.489 "name": "spare", 00:20:50.489 "uuid": "a01129fa-d230-57c7-9ac4-e82e26225e7e", 00:20:50.489 "is_configured": true, 00:20:50.489 "data_offset": 2048, 00:20:50.489 "data_size": 63488 00:20:50.489 }, 00:20:50.489 { 00:20:50.489 "name": null, 00:20:50.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.490 "is_configured": false, 00:20:50.490 "data_offset": 2048, 00:20:50.490 "data_size": 63488 00:20:50.490 }, 00:20:50.490 { 00:20:50.490 "name": "BaseBdev3", 00:20:50.490 "uuid": "1300d842-f431-5fcd-8bf7-e42ad90e363a", 00:20:50.490 "is_configured": true, 00:20:50.490 "data_offset": 2048, 00:20:50.490 "data_size": 63488 00:20:50.490 }, 00:20:50.490 { 00:20:50.490 "name": "BaseBdev4", 00:20:50.490 "uuid": "b6343ff9-521c-5dce-b2e7-6e483715c111", 00:20:50.490 "is_configured": true, 00:20:50.490 "data_offset": 2048, 00:20:50.490 "data_size": 63488 00:20:50.490 } 00:20:50.490 ] 00:20:50.490 }' 00:20:50.490 21:17:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:50.490 21:17:13 -- 
common/autotest_common.sh@10 -- # set +x 00:20:51.425 21:17:13 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:51.425 21:17:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:51.425 21:17:13 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:51.425 21:17:13 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:51.425 21:17:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:51.425 21:17:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.425 21:17:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.425 21:17:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:51.425 "name": "raid_bdev1", 00:20:51.425 "uuid": "440cd4cb-4c58-415c-b6ec-9e4298dc10f3", 00:20:51.425 "strip_size_kb": 0, 00:20:51.425 "state": "online", 00:20:51.425 "raid_level": "raid1", 00:20:51.425 "superblock": true, 00:20:51.425 "num_base_bdevs": 4, 00:20:51.425 "num_base_bdevs_discovered": 3, 00:20:51.425 "num_base_bdevs_operational": 3, 00:20:51.425 "base_bdevs_list": [ 00:20:51.425 { 00:20:51.425 "name": "spare", 00:20:51.425 "uuid": "a01129fa-d230-57c7-9ac4-e82e26225e7e", 00:20:51.425 "is_configured": true, 00:20:51.425 "data_offset": 2048, 00:20:51.425 "data_size": 63488 00:20:51.425 }, 00:20:51.425 { 00:20:51.425 "name": null, 00:20:51.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.425 "is_configured": false, 00:20:51.425 "data_offset": 2048, 00:20:51.425 "data_size": 63488 00:20:51.425 }, 00:20:51.425 { 00:20:51.425 "name": "BaseBdev3", 00:20:51.426 "uuid": "1300d842-f431-5fcd-8bf7-e42ad90e363a", 00:20:51.426 "is_configured": true, 00:20:51.426 "data_offset": 2048, 00:20:51.426 "data_size": 63488 00:20:51.426 }, 00:20:51.426 { 00:20:51.426 "name": "BaseBdev4", 00:20:51.426 "uuid": "b6343ff9-521c-5dce-b2e7-6e483715c111", 00:20:51.426 "is_configured": true, 00:20:51.426 "data_offset": 2048, 00:20:51.426 "data_size": 63488 00:20:51.426 } 00:20:51.426 ] 00:20:51.426 }' 00:20:51.426 21:17:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:51.426 21:17:14 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:51.426 21:17:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:51.426 21:17:14 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:51.426 21:17:14 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.426 21:17:14 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:51.684 21:17:14 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:20:51.684 21:17:14 -- bdev/bdev_raid.sh@709 -- # killprocess 138809 00:20:51.684 21:17:14 -- common/autotest_common.sh@926 -- # '[' -z 138809 ']' 00:20:51.684 21:17:14 -- common/autotest_common.sh@930 -- # kill -0 138809 00:20:51.684 21:17:14 -- common/autotest_common.sh@931 -- # uname 00:20:51.684 21:17:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:51.684 21:17:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 138809 00:20:51.684 killing process with pid 138809 00:20:51.684 21:17:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:51.684 21:17:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:51.684 21:17:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 138809' 00:20:51.684 21:17:14 -- common/autotest_common.sh@945 -- # kill 
138809 00:20:51.684 21:17:14 -- common/autotest_common.sh@950 -- # wait 138809 00:20:51.685 Received shutdown signal, test time was about 60.000000 seconds 00:20:51.685 00:20:51.685 Latency(us) 00:20:51.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.685 =================================================================================================================== 00:20:51.685 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:51.685 [2024-06-07 21:17:14.353557] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:51.685 [2024-06-07 21:17:14.353705] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:51.685 [2024-06-07 21:17:14.353805] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:51.685 [2024-06-07 21:17:14.353827] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:20:51.943 [2024-06-07 21:17:14.399154] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:52.203 ************************************ 00:20:52.203 END TEST raid_rebuild_test_sb 00:20:52.203 ************************************ 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:52.203 00:20:52.203 real 0m27.731s 00:20:52.203 user 0m40.733s 00:20:52.203 sys 0m4.607s 00:20:52.203 21:17:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:52.203 21:17:14 -- common/autotest_common.sh@10 -- # set +x 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:20:52.203 21:17:14 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:52.203 21:17:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:52.203 21:17:14 -- common/autotest_common.sh@10 -- # set +x 00:20:52.203 ************************************ 00:20:52.203 START TEST raid_rebuild_test_io 00:20:52.203 ************************************ 00:20:52.203 21:17:14 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false true 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:52.203 
21:17:14 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@544 -- # raid_pid=139525 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@545 -- # waitforlisten 139525 /var/tmp/spdk-raid.sock 00:20:52.203 21:17:14 -- common/autotest_common.sh@819 -- # '[' -z 139525 ']' 00:20:52.203 21:17:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:52.203 21:17:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:52.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:52.203 21:17:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:52.203 21:17:14 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:52.203 21:17:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:52.203 21:17:14 -- common/autotest_common.sh@10 -- # set +x 00:20:52.203 [2024-06-07 21:17:14.790138] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:52.203 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:52.203 Zero copy mechanism will not be used. 
00:20:52.203 [2024-06-07 21:17:14.790392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139525 ] 00:20:52.463 [2024-06-07 21:17:14.956668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.463 [2024-06-07 21:17:15.041482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.463 [2024-06-07 21:17:15.092638] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:53.399 21:17:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:53.399 21:17:15 -- common/autotest_common.sh@852 -- # return 0 00:20:53.399 21:17:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:53.399 21:17:15 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:53.399 21:17:15 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:53.399 BaseBdev1 00:20:53.399 21:17:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:53.399 21:17:15 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:53.399 21:17:15 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:53.658 BaseBdev2 00:20:53.658 21:17:16 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:53.658 21:17:16 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:53.658 21:17:16 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:53.917 BaseBdev3 00:20:53.917 21:17:16 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:53.917 21:17:16 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:53.917 21:17:16 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:54.174 BaseBdev4 00:20:54.174 21:17:16 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:54.432 spare_malloc 00:20:54.432 21:17:16 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:54.432 spare_delay 00:20:54.432 21:17:17 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:54.690 [2024-06-07 21:17:17.323220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:54.690 [2024-06-07 21:17:17.323402] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.690 [2024-06-07 21:17:17.323453] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:54.690 [2024-06-07 21:17:17.323510] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.690 [2024-06-07 21:17:17.326286] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.690 [2024-06-07 21:17:17.326338] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:54.690 spare 00:20:54.690 21:17:17 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:54.949 [2024-06-07 21:17:17.527318] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:54.949 [2024-06-07 21:17:17.529559] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:54.949 [2024-06-07 21:17:17.529616] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:54.949 [2024-06-07 21:17:17.529654] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:54.949 [2024-06-07 21:17:17.529747] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:20:54.949 [2024-06-07 21:17:17.529760] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:54.949 [2024-06-07 21:17:17.530023] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:54.949 [2024-06-07 21:17:17.530536] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:20:54.949 [2024-06-07 21:17:17.530559] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:20:54.949 [2024-06-07 21:17:17.530763] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.949 21:17:17 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:54.949 21:17:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:54.949 21:17:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:54.949 21:17:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:54.949 21:17:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:54.949 21:17:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:54.949 21:17:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:54.949 21:17:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:54.949 21:17:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:54.949 21:17:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:54.949 21:17:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.949 21:17:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.208 21:17:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:55.208 "name": "raid_bdev1", 00:20:55.208 "uuid": "cf0c62f6-84cd-4dfb-b8cd-7d4b12fb2ada", 00:20:55.208 "strip_size_kb": 0, 00:20:55.208 "state": "online", 00:20:55.208 "raid_level": "raid1", 00:20:55.208 "superblock": false, 00:20:55.208 "num_base_bdevs": 4, 00:20:55.208 "num_base_bdevs_discovered": 4, 00:20:55.208 "num_base_bdevs_operational": 4, 00:20:55.208 "base_bdevs_list": [ 00:20:55.208 { 00:20:55.208 "name": "BaseBdev1", 00:20:55.208 "uuid": "177dd9ac-3435-4f95-8e32-23080c5f4125", 00:20:55.208 "is_configured": true, 00:20:55.208 "data_offset": 0, 00:20:55.208 "data_size": 65536 00:20:55.208 }, 00:20:55.208 { 00:20:55.208 "name": "BaseBdev2", 00:20:55.208 "uuid": "7f65e827-9c7c-4eae-8391-f31e611292fe", 00:20:55.208 "is_configured": true, 00:20:55.208 "data_offset": 0, 00:20:55.208 "data_size": 65536 00:20:55.208 }, 00:20:55.208 { 00:20:55.208 "name": "BaseBdev3", 00:20:55.208 "uuid": "aeaed14b-daea-4dfa-a16d-ac5f574e5ded", 00:20:55.208 "is_configured": true, 00:20:55.208 "data_offset": 0, 00:20:55.208 "data_size": 65536 00:20:55.208 }, 
00:20:55.208 { 00:20:55.208 "name": "BaseBdev4", 00:20:55.208 "uuid": "e10d689b-f232-4cd0-8b1f-276b2bfd1f17", 00:20:55.208 "is_configured": true, 00:20:55.208 "data_offset": 0, 00:20:55.208 "data_size": 65536 00:20:55.208 } 00:20:55.208 ] 00:20:55.208 }' 00:20:55.208 21:17:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:55.208 21:17:17 -- common/autotest_common.sh@10 -- # set +x 00:20:55.777 21:17:18 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:55.777 21:17:18 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:56.037 [2024-06-07 21:17:18.647922] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:56.037 21:17:18 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:56.037 21:17:18 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.037 21:17:18 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:56.296 21:17:18 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:56.296 21:17:18 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:56.296 21:17:18 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:56.296 21:17:18 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:56.556 [2024-06-07 21:17:19.042053] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:56.556 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:56.556 Zero copy mechanism will not be used. 00:20:56.556 Running I/O for 60 seconds... 
00:20:56.556 [2024-06-07 21:17:19.126003] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:56.556 [2024-06-07 21:17:19.132650] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:20:56.556 21:17:19 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:56.556 21:17:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:56.556 21:17:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:56.556 21:17:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:56.556 21:17:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:56.556 21:17:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:56.556 21:17:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:56.556 21:17:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:56.556 21:17:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:56.556 21:17:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:56.556 21:17:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.556 21:17:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.815 21:17:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:56.815 "name": "raid_bdev1", 00:20:56.815 "uuid": "cf0c62f6-84cd-4dfb-b8cd-7d4b12fb2ada", 00:20:56.815 "strip_size_kb": 0, 00:20:56.815 "state": "online", 00:20:56.815 "raid_level": "raid1", 00:20:56.815 "superblock": false, 00:20:56.815 "num_base_bdevs": 4, 00:20:56.815 "num_base_bdevs_discovered": 3, 00:20:56.815 "num_base_bdevs_operational": 3, 00:20:56.815 "base_bdevs_list": [ 00:20:56.815 { 00:20:56.815 "name": null, 00:20:56.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.815 "is_configured": false, 00:20:56.815 "data_offset": 0, 00:20:56.815 "data_size": 65536 00:20:56.815 }, 00:20:56.815 { 00:20:56.815 "name": "BaseBdev2", 00:20:56.815 "uuid": "7f65e827-9c7c-4eae-8391-f31e611292fe", 00:20:56.815 "is_configured": true, 00:20:56.815 "data_offset": 0, 00:20:56.815 "data_size": 65536 00:20:56.815 }, 00:20:56.815 { 00:20:56.815 "name": "BaseBdev3", 00:20:56.815 "uuid": "aeaed14b-daea-4dfa-a16d-ac5f574e5ded", 00:20:56.815 "is_configured": true, 00:20:56.815 "data_offset": 0, 00:20:56.815 "data_size": 65536 00:20:56.815 }, 00:20:56.815 { 00:20:56.815 "name": "BaseBdev4", 00:20:56.815 "uuid": "e10d689b-f232-4cd0-8b1f-276b2bfd1f17", 00:20:56.815 "is_configured": true, 00:20:56.815 "data_offset": 0, 00:20:56.815 "data_size": 65536 00:20:56.815 } 00:20:56.815 ] 00:20:56.815 }' 00:20:56.815 21:17:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:56.815 21:17:19 -- common/autotest_common.sh@10 -- # set +x 00:20:57.750 21:17:20 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:57.750 [2024-06-07 21:17:20.320286] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:57.750 [2024-06-07 21:17:20.320771] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:57.750 21:17:20 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:57.750 [2024-06-07 21:17:20.409598] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:20:57.750 [2024-06-07 21:17:20.412647] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:58.009 [2024-06-07 
21:17:20.540624] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:58.009 [2024-06-07 21:17:20.542545] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:58.268 [2024-06-07 21:17:20.760371] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:58.268 [2024-06-07 21:17:20.761566] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:58.526 [2024-06-07 21:17:21.106852] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:58.785 [2024-06-07 21:17:21.217767] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:58.785 21:17:21 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:58.785 21:17:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:58.785 21:17:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:58.785 21:17:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:58.785 21:17:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:58.785 21:17:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.785 21:17:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.044 [2024-06-07 21:17:21.562698] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:59.044 [2024-06-07 21:17:21.564645] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:59.044 21:17:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:59.044 "name": "raid_bdev1", 00:20:59.044 "uuid": "cf0c62f6-84cd-4dfb-b8cd-7d4b12fb2ada", 00:20:59.044 "strip_size_kb": 0, 00:20:59.044 "state": "online", 00:20:59.044 "raid_level": "raid1", 00:20:59.044 "superblock": false, 00:20:59.044 "num_base_bdevs": 4, 00:20:59.044 "num_base_bdevs_discovered": 4, 00:20:59.044 "num_base_bdevs_operational": 4, 00:20:59.044 "process": { 00:20:59.044 "type": "rebuild", 00:20:59.044 "target": "spare", 00:20:59.044 "progress": { 00:20:59.044 "blocks": 14336, 00:20:59.044 "percent": 21 00:20:59.044 } 00:20:59.044 }, 00:20:59.044 "base_bdevs_list": [ 00:20:59.044 { 00:20:59.044 "name": "spare", 00:20:59.044 "uuid": "498697fc-9be0-55fd-b18a-cfecaad1acff", 00:20:59.044 "is_configured": true, 00:20:59.044 "data_offset": 0, 00:20:59.044 "data_size": 65536 00:20:59.044 }, 00:20:59.044 { 00:20:59.044 "name": "BaseBdev2", 00:20:59.044 "uuid": "7f65e827-9c7c-4eae-8391-f31e611292fe", 00:20:59.044 "is_configured": true, 00:20:59.044 "data_offset": 0, 00:20:59.044 "data_size": 65536 00:20:59.044 }, 00:20:59.044 { 00:20:59.044 "name": "BaseBdev3", 00:20:59.044 "uuid": "aeaed14b-daea-4dfa-a16d-ac5f574e5ded", 00:20:59.044 "is_configured": true, 00:20:59.044 "data_offset": 0, 00:20:59.044 "data_size": 65536 00:20:59.044 }, 00:20:59.044 { 00:20:59.044 "name": "BaseBdev4", 00:20:59.044 "uuid": "e10d689b-f232-4cd0-8b1f-276b2bfd1f17", 00:20:59.044 "is_configured": true, 00:20:59.044 "data_offset": 0, 00:20:59.044 "data_size": 65536 00:20:59.044 } 00:20:59.044 ] 00:20:59.044 }' 00:20:59.044 21:17:21 -- bdev/bdev_raid.sh@190 -- # 
jq -r '.process.type // "none"' 00:20:59.044 21:17:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:59.044 21:17:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:59.335 21:17:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:59.335 21:17:21 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:59.335 [2024-06-07 21:17:21.973299] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:59.597 [2024-06-07 21:17:22.056348] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:59.597 [2024-06-07 21:17:22.058048] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:59.597 [2024-06-07 21:17:22.166367] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:59.597 [2024-06-07 21:17:22.175928] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.597 [2024-06-07 21:17:22.184961] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:20:59.597 21:17:22 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:59.597 21:17:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:59.597 21:17:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:59.597 21:17:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:59.597 21:17:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:59.597 21:17:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:59.597 21:17:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:59.597 21:17:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:59.597 21:17:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:59.597 21:17:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:59.597 21:17:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.597 21:17:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.854 21:17:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:59.854 "name": "raid_bdev1", 00:20:59.854 "uuid": "cf0c62f6-84cd-4dfb-b8cd-7d4b12fb2ada", 00:20:59.854 "strip_size_kb": 0, 00:20:59.854 "state": "online", 00:20:59.854 "raid_level": "raid1", 00:20:59.854 "superblock": false, 00:20:59.854 "num_base_bdevs": 4, 00:20:59.854 "num_base_bdevs_discovered": 3, 00:20:59.854 "num_base_bdevs_operational": 3, 00:20:59.854 "base_bdevs_list": [ 00:20:59.854 { 00:20:59.854 "name": null, 00:20:59.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.854 "is_configured": false, 00:20:59.854 "data_offset": 0, 00:20:59.854 "data_size": 65536 00:20:59.854 }, 00:20:59.854 { 00:20:59.854 "name": "BaseBdev2", 00:20:59.854 "uuid": "7f65e827-9c7c-4eae-8391-f31e611292fe", 00:20:59.854 "is_configured": true, 00:20:59.854 "data_offset": 0, 00:20:59.854 "data_size": 65536 00:20:59.854 }, 00:20:59.854 { 00:20:59.854 "name": "BaseBdev3", 00:20:59.854 "uuid": "aeaed14b-daea-4dfa-a16d-ac5f574e5ded", 00:20:59.854 "is_configured": true, 00:20:59.854 "data_offset": 0, 00:20:59.854 "data_size": 65536 00:20:59.854 }, 00:20:59.854 { 00:20:59.854 "name": "BaseBdev4", 00:20:59.854 "uuid": "e10d689b-f232-4cd0-8b1f-276b2bfd1f17", 
00:20:59.854 "is_configured": true, 00:20:59.854 "data_offset": 0, 00:20:59.854 "data_size": 65536 00:20:59.854 } 00:20:59.854 ] 00:20:59.854 }' 00:20:59.854 21:17:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:59.854 21:17:22 -- common/autotest_common.sh@10 -- # set +x 00:21:00.789 21:17:23 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:00.789 21:17:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:00.789 21:17:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:00.789 21:17:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:00.789 21:17:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:00.789 21:17:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.789 21:17:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.789 21:17:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:00.789 "name": "raid_bdev1", 00:21:00.789 "uuid": "cf0c62f6-84cd-4dfb-b8cd-7d4b12fb2ada", 00:21:00.790 "strip_size_kb": 0, 00:21:00.790 "state": "online", 00:21:00.790 "raid_level": "raid1", 00:21:00.790 "superblock": false, 00:21:00.790 "num_base_bdevs": 4, 00:21:00.790 "num_base_bdevs_discovered": 3, 00:21:00.790 "num_base_bdevs_operational": 3, 00:21:00.790 "base_bdevs_list": [ 00:21:00.790 { 00:21:00.790 "name": null, 00:21:00.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.790 "is_configured": false, 00:21:00.790 "data_offset": 0, 00:21:00.790 "data_size": 65536 00:21:00.790 }, 00:21:00.790 { 00:21:00.790 "name": "BaseBdev2", 00:21:00.790 "uuid": "7f65e827-9c7c-4eae-8391-f31e611292fe", 00:21:00.790 "is_configured": true, 00:21:00.790 "data_offset": 0, 00:21:00.790 "data_size": 65536 00:21:00.790 }, 00:21:00.790 { 00:21:00.790 "name": "BaseBdev3", 00:21:00.790 "uuid": "aeaed14b-daea-4dfa-a16d-ac5f574e5ded", 00:21:00.790 "is_configured": true, 00:21:00.790 "data_offset": 0, 00:21:00.790 "data_size": 65536 00:21:00.790 }, 00:21:00.790 { 00:21:00.790 "name": "BaseBdev4", 00:21:00.790 "uuid": "e10d689b-f232-4cd0-8b1f-276b2bfd1f17", 00:21:00.790 "is_configured": true, 00:21:00.790 "data_offset": 0, 00:21:00.790 "data_size": 65536 00:21:00.790 } 00:21:00.790 ] 00:21:00.790 }' 00:21:00.790 21:17:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:00.790 21:17:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:00.790 21:17:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:01.049 21:17:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:01.049 21:17:23 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:01.049 [2024-06-07 21:17:23.659456] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:01.049 [2024-06-07 21:17:23.659758] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:01.049 21:17:23 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:01.049 [2024-06-07 21:17:23.722445] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:01.307 [2024-06-07 21:17:23.725125] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:01.307 [2024-06-07 21:17:23.848666] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:01.307 [2024-06-07 
21:17:23.849964] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:01.565 [2024-06-07 21:17:24.051861] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:01.565 [2024-06-07 21:17:24.052510] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:01.824 [2024-06-07 21:17:24.399098] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:02.083 [2024-06-07 21:17:24.517438] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:02.083 [2024-06-07 21:17:24.518363] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:02.083 21:17:24 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:02.083 21:17:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:02.083 21:17:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:02.083 21:17:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:02.083 21:17:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:02.083 21:17:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.083 21:17:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.342 [2024-06-07 21:17:24.861665] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:02.342 21:17:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:02.342 "name": "raid_bdev1", 00:21:02.342 "uuid": "cf0c62f6-84cd-4dfb-b8cd-7d4b12fb2ada", 00:21:02.342 "strip_size_kb": 0, 00:21:02.342 "state": "online", 00:21:02.342 "raid_level": "raid1", 00:21:02.342 "superblock": false, 00:21:02.342 "num_base_bdevs": 4, 00:21:02.342 "num_base_bdevs_discovered": 4, 00:21:02.342 "num_base_bdevs_operational": 4, 00:21:02.342 "process": { 00:21:02.342 "type": "rebuild", 00:21:02.342 "target": "spare", 00:21:02.342 "progress": { 00:21:02.342 "blocks": 14336, 00:21:02.342 "percent": 21 00:21:02.342 } 00:21:02.342 }, 00:21:02.342 "base_bdevs_list": [ 00:21:02.342 { 00:21:02.342 "name": "spare", 00:21:02.342 "uuid": "498697fc-9be0-55fd-b18a-cfecaad1acff", 00:21:02.342 "is_configured": true, 00:21:02.342 "data_offset": 0, 00:21:02.342 "data_size": 65536 00:21:02.342 }, 00:21:02.342 { 00:21:02.342 "name": "BaseBdev2", 00:21:02.342 "uuid": "7f65e827-9c7c-4eae-8391-f31e611292fe", 00:21:02.342 "is_configured": true, 00:21:02.342 "data_offset": 0, 00:21:02.342 "data_size": 65536 00:21:02.342 }, 00:21:02.342 { 00:21:02.342 "name": "BaseBdev3", 00:21:02.342 "uuid": "aeaed14b-daea-4dfa-a16d-ac5f574e5ded", 00:21:02.342 "is_configured": true, 00:21:02.342 "data_offset": 0, 00:21:02.342 "data_size": 65536 00:21:02.342 }, 00:21:02.342 { 00:21:02.342 "name": "BaseBdev4", 00:21:02.342 "uuid": "e10d689b-f232-4cd0-8b1f-276b2bfd1f17", 00:21:02.342 "is_configured": true, 00:21:02.342 "data_offset": 0, 00:21:02.342 "data_size": 65536 00:21:02.342 } 00:21:02.342 ] 00:21:02.342 }' 00:21:02.342 21:17:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:02.342 21:17:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:02.342 21:17:24 -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:02.600 21:17:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:02.600 21:17:25 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:02.600 21:17:25 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:02.600 21:17:25 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:02.600 21:17:25 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:02.600 21:17:25 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:02.600 [2024-06-07 21:17:25.100428] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:02.859 [2024-06-07 21:17:25.282682] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:02.859 [2024-06-07 21:17:25.423834] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005930 00:21:02.859 [2024-06-07 21:17:25.424217] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ba0 00:21:02.859 21:17:25 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:02.859 21:17:25 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:02.859 21:17:25 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:02.859 21:17:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:02.859 21:17:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:02.859 21:17:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:02.859 21:17:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:02.859 21:17:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.859 21:17:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.119 21:17:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:03.119 "name": "raid_bdev1", 00:21:03.119 "uuid": "cf0c62f6-84cd-4dfb-b8cd-7d4b12fb2ada", 00:21:03.119 "strip_size_kb": 0, 00:21:03.119 "state": "online", 00:21:03.119 "raid_level": "raid1", 00:21:03.119 "superblock": false, 00:21:03.119 "num_base_bdevs": 4, 00:21:03.119 "num_base_bdevs_discovered": 3, 00:21:03.119 "num_base_bdevs_operational": 3, 00:21:03.119 "process": { 00:21:03.119 "type": "rebuild", 00:21:03.119 "target": "spare", 00:21:03.119 "progress": { 00:21:03.119 "blocks": 22528, 00:21:03.119 "percent": 34 00:21:03.119 } 00:21:03.119 }, 00:21:03.119 "base_bdevs_list": [ 00:21:03.119 { 00:21:03.119 "name": "spare", 00:21:03.119 "uuid": "498697fc-9be0-55fd-b18a-cfecaad1acff", 00:21:03.119 "is_configured": true, 00:21:03.119 "data_offset": 0, 00:21:03.119 "data_size": 65536 00:21:03.119 }, 00:21:03.119 { 00:21:03.119 "name": null, 00:21:03.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.119 "is_configured": false, 00:21:03.119 "data_offset": 0, 00:21:03.119 "data_size": 65536 00:21:03.119 }, 00:21:03.119 { 00:21:03.119 "name": "BaseBdev3", 00:21:03.119 "uuid": "aeaed14b-daea-4dfa-a16d-ac5f574e5ded", 00:21:03.119 "is_configured": true, 00:21:03.119 "data_offset": 0, 00:21:03.119 "data_size": 65536 00:21:03.119 }, 00:21:03.119 { 00:21:03.119 "name": "BaseBdev4", 00:21:03.119 "uuid": "e10d689b-f232-4cd0-8b1f-276b2bfd1f17", 00:21:03.119 "is_configured": true, 00:21:03.119 "data_offset": 0, 00:21:03.119 "data_size": 65536 00:21:03.119 } 00:21:03.119 ] 00:21:03.119 }' 00:21:03.119 
21:17:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:03.119 21:17:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:03.119 21:17:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:03.377 21:17:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:03.377 21:17:25 -- bdev/bdev_raid.sh@657 -- # local timeout=505 00:21:03.377 21:17:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:03.377 21:17:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:03.377 21:17:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:03.377 21:17:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:03.377 21:17:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:03.377 21:17:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:03.377 21:17:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.377 21:17:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.377 [2024-06-07 21:17:25.842556] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:03.377 21:17:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:03.377 "name": "raid_bdev1", 00:21:03.377 "uuid": "cf0c62f6-84cd-4dfb-b8cd-7d4b12fb2ada", 00:21:03.377 "strip_size_kb": 0, 00:21:03.377 "state": "online", 00:21:03.377 "raid_level": "raid1", 00:21:03.377 "superblock": false, 00:21:03.377 "num_base_bdevs": 4, 00:21:03.377 "num_base_bdevs_discovered": 3, 00:21:03.377 "num_base_bdevs_operational": 3, 00:21:03.377 "process": { 00:21:03.377 "type": "rebuild", 00:21:03.377 "target": "spare", 00:21:03.377 "progress": { 00:21:03.377 "blocks": 28672, 00:21:03.377 "percent": 43 00:21:03.377 } 00:21:03.377 }, 00:21:03.377 "base_bdevs_list": [ 00:21:03.377 { 00:21:03.377 "name": "spare", 00:21:03.377 "uuid": "498697fc-9be0-55fd-b18a-cfecaad1acff", 00:21:03.377 "is_configured": true, 00:21:03.377 "data_offset": 0, 00:21:03.377 "data_size": 65536 00:21:03.377 }, 00:21:03.377 { 00:21:03.377 "name": null, 00:21:03.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.378 "is_configured": false, 00:21:03.378 "data_offset": 0, 00:21:03.378 "data_size": 65536 00:21:03.378 }, 00:21:03.378 { 00:21:03.378 "name": "BaseBdev3", 00:21:03.378 "uuid": "aeaed14b-daea-4dfa-a16d-ac5f574e5ded", 00:21:03.378 "is_configured": true, 00:21:03.378 "data_offset": 0, 00:21:03.378 "data_size": 65536 00:21:03.378 }, 00:21:03.378 { 00:21:03.378 "name": "BaseBdev4", 00:21:03.378 "uuid": "e10d689b-f232-4cd0-8b1f-276b2bfd1f17", 00:21:03.378 "is_configured": true, 00:21:03.378 "data_offset": 0, 00:21:03.378 "data_size": 65536 00:21:03.378 } 00:21:03.378 ] 00:21:03.378 }' 00:21:03.378 21:17:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:03.636 21:17:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:03.636 21:17:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:03.636 21:17:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:03.636 21:17:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:04.204 [2024-06-07 21:17:26.870594] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:21:04.204 [2024-06-07 21:17:26.871438] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 
offset_begin: 43008 offset_end: 49152 00:21:04.463 [2024-06-07 21:17:26.979059] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:21:04.463 [2024-06-07 21:17:26.979500] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:21:04.463 21:17:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:04.463 21:17:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:04.463 21:17:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:04.463 21:17:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:04.463 21:17:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:04.463 21:17:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:04.463 21:17:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.463 21:17:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.722 [2024-06-07 21:17:27.292874] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:21:04.722 21:17:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:04.722 "name": "raid_bdev1", 00:21:04.722 "uuid": "cf0c62f6-84cd-4dfb-b8cd-7d4b12fb2ada", 00:21:04.722 "strip_size_kb": 0, 00:21:04.722 "state": "online", 00:21:04.722 "raid_level": "raid1", 00:21:04.722 "superblock": false, 00:21:04.722 "num_base_bdevs": 4, 00:21:04.722 "num_base_bdevs_discovered": 3, 00:21:04.722 "num_base_bdevs_operational": 3, 00:21:04.722 "process": { 00:21:04.722 "type": "rebuild", 00:21:04.722 "target": "spare", 00:21:04.722 "progress": { 00:21:04.722 "blocks": 51200, 00:21:04.722 "percent": 78 00:21:04.722 } 00:21:04.722 }, 00:21:04.722 "base_bdevs_list": [ 00:21:04.722 { 00:21:04.722 "name": "spare", 00:21:04.722 "uuid": "498697fc-9be0-55fd-b18a-cfecaad1acff", 00:21:04.722 "is_configured": true, 00:21:04.722 "data_offset": 0, 00:21:04.722 "data_size": 65536 00:21:04.722 }, 00:21:04.722 { 00:21:04.722 "name": null, 00:21:04.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.722 "is_configured": false, 00:21:04.722 "data_offset": 0, 00:21:04.722 "data_size": 65536 00:21:04.722 }, 00:21:04.722 { 00:21:04.722 "name": "BaseBdev3", 00:21:04.722 "uuid": "aeaed14b-daea-4dfa-a16d-ac5f574e5ded", 00:21:04.722 "is_configured": true, 00:21:04.722 "data_offset": 0, 00:21:04.722 "data_size": 65536 00:21:04.722 }, 00:21:04.722 { 00:21:04.722 "name": "BaseBdev4", 00:21:04.722 "uuid": "e10d689b-f232-4cd0-8b1f-276b2bfd1f17", 00:21:04.722 "is_configured": true, 00:21:04.722 "data_offset": 0, 00:21:04.722 "data_size": 65536 00:21:04.722 } 00:21:04.722 ] 00:21:04.722 }' 00:21:04.722 21:17:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:04.980 [2024-06-07 21:17:27.409026] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:04.980 21:17:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:04.980 21:17:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:04.980 21:17:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:04.980 21:17:27 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:05.238 [2024-06-07 21:17:27.741724] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 
offset_end: 61440 00:21:05.804 [2024-06-07 21:17:28.176867] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:05.804 [2024-06-07 21:17:28.282944] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:05.804 [2024-06-07 21:17:28.285055] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.061 21:17:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:06.061 21:17:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:06.061 21:17:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:06.061 21:17:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:06.061 21:17:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:06.061 21:17:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:06.061 21:17:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.061 21:17:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.061 21:17:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:06.061 "name": "raid_bdev1", 00:21:06.061 "uuid": "cf0c62f6-84cd-4dfb-b8cd-7d4b12fb2ada", 00:21:06.061 "strip_size_kb": 0, 00:21:06.061 "state": "online", 00:21:06.061 "raid_level": "raid1", 00:21:06.061 "superblock": false, 00:21:06.061 "num_base_bdevs": 4, 00:21:06.061 "num_base_bdevs_discovered": 3, 00:21:06.062 "num_base_bdevs_operational": 3, 00:21:06.062 "base_bdevs_list": [ 00:21:06.062 { 00:21:06.062 "name": "spare", 00:21:06.062 "uuid": "498697fc-9be0-55fd-b18a-cfecaad1acff", 00:21:06.062 "is_configured": true, 00:21:06.062 "data_offset": 0, 00:21:06.062 "data_size": 65536 00:21:06.062 }, 00:21:06.062 { 00:21:06.062 "name": null, 00:21:06.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.062 "is_configured": false, 00:21:06.062 "data_offset": 0, 00:21:06.062 "data_size": 65536 00:21:06.062 }, 00:21:06.062 { 00:21:06.062 "name": "BaseBdev3", 00:21:06.062 "uuid": "aeaed14b-daea-4dfa-a16d-ac5f574e5ded", 00:21:06.062 "is_configured": true, 00:21:06.062 "data_offset": 0, 00:21:06.062 "data_size": 65536 00:21:06.062 }, 00:21:06.062 { 00:21:06.062 "name": "BaseBdev4", 00:21:06.062 "uuid": "e10d689b-f232-4cd0-8b1f-276b2bfd1f17", 00:21:06.062 "is_configured": true, 00:21:06.062 "data_offset": 0, 00:21:06.062 "data_size": 65536 00:21:06.062 } 00:21:06.062 ] 00:21:06.062 }' 00:21:06.062 21:17:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:06.320 21:17:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:06.320 21:17:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:06.320 21:17:28 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:06.320 21:17:28 -- bdev/bdev_raid.sh@660 -- # break 00:21:06.320 21:17:28 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:06.320 21:17:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:06.320 21:17:28 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:06.320 21:17:28 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:06.320 21:17:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:06.320 21:17:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.320 21:17:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.577 
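Decoding the xtrace above before the @188 output lands below: the @183-@191 lines are the body of the harness helper verify_raid_bdev_process, and @657-@662 form the polling loop around it. A minimal sketch reconstructed from those traced lines follows; the RPC socket path, jq filters, and names are verbatim from the trace, while the loop's error handling and the timeout arithmetic are inferred, so treat this as an illustration rather than the verbatim SPDK source.

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # True iff the named raid bdev currently reports the given background
    # process type and target in its RPC dump.
    verify_raid_bdev_process() {
        local raid_bdev_name=$1 process_type=$2 target=$3
        local raid_bdev_info

        raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")

        [[ $(jq -r '.process.type // "none"' <<<"$raid_bdev_info") == "$process_type" ]] &&
        [[ $(jq -r '.process.target // "none"' <<<"$raid_bdev_info") == "$target" ]]
    }

    # Poll until the rebuild disappears; the traced values (timeout=505 here,
    # 525 in the later test) are consistent with SECONDS plus a fixed budget
    # captured at loop entry, but that arithmetic is an inference.
    timeout=$((SECONDS + 500))
    while ((SECONDS < timeout)); do
        verify_raid_bdev_process raid_bdev1 rebuild spare || break
        sleep 1
    done
    verify_raid_bdev_process raid_bdev1 none none   # rebuild finished cleanly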
21:17:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:06.577 "name": "raid_bdev1", 00:21:06.577 "uuid": "cf0c62f6-84cd-4dfb-b8cd-7d4b12fb2ada", 00:21:06.577 "strip_size_kb": 0, 00:21:06.577 "state": "online", 00:21:06.577 "raid_level": "raid1", 00:21:06.577 "superblock": false, 00:21:06.577 "num_base_bdevs": 4, 00:21:06.577 "num_base_bdevs_discovered": 3, 00:21:06.577 "num_base_bdevs_operational": 3, 00:21:06.577 "base_bdevs_list": [ 00:21:06.577 { 00:21:06.577 "name": "spare", 00:21:06.577 "uuid": "498697fc-9be0-55fd-b18a-cfecaad1acff", 00:21:06.577 "is_configured": true, 00:21:06.577 "data_offset": 0, 00:21:06.577 "data_size": 65536 00:21:06.577 }, 00:21:06.577 { 00:21:06.577 "name": null, 00:21:06.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.577 "is_configured": false, 00:21:06.577 "data_offset": 0, 00:21:06.577 "data_size": 65536 00:21:06.577 }, 00:21:06.578 { 00:21:06.578 "name": "BaseBdev3", 00:21:06.578 "uuid": "aeaed14b-daea-4dfa-a16d-ac5f574e5ded", 00:21:06.578 "is_configured": true, 00:21:06.578 "data_offset": 0, 00:21:06.578 "data_size": 65536 00:21:06.578 }, 00:21:06.578 { 00:21:06.578 "name": "BaseBdev4", 00:21:06.578 "uuid": "e10d689b-f232-4cd0-8b1f-276b2bfd1f17", 00:21:06.578 "is_configured": true, 00:21:06.578 "data_offset": 0, 00:21:06.578 "data_size": 65536 00:21:06.578 } 00:21:06.578 ] 00:21:06.578 }' 00:21:06.578 21:17:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:06.578 21:17:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:06.578 21:17:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:06.578 21:17:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:06.578 21:17:29 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:06.578 21:17:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:06.578 21:17:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:06.578 21:17:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:06.578 21:17:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:06.578 21:17:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:06.578 21:17:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:06.578 21:17:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:06.578 21:17:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:06.578 21:17:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:06.578 21:17:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.578 21:17:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.835 21:17:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:06.835 "name": "raid_bdev1", 00:21:06.835 "uuid": "cf0c62f6-84cd-4dfb-b8cd-7d4b12fb2ada", 00:21:06.835 "strip_size_kb": 0, 00:21:06.835 "state": "online", 00:21:06.835 "raid_level": "raid1", 00:21:06.835 "superblock": false, 00:21:06.835 "num_base_bdevs": 4, 00:21:06.835 "num_base_bdevs_discovered": 3, 00:21:06.835 "num_base_bdevs_operational": 3, 00:21:06.835 "base_bdevs_list": [ 00:21:06.835 { 00:21:06.835 "name": "spare", 00:21:06.835 "uuid": "498697fc-9be0-55fd-b18a-cfecaad1acff", 00:21:06.835 "is_configured": true, 00:21:06.835 "data_offset": 0, 00:21:06.835 "data_size": 65536 00:21:06.835 }, 00:21:06.835 { 00:21:06.835 "name": null, 00:21:06.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.836 "is_configured": false, 
00:21:06.836 "data_offset": 0, 00:21:06.836 "data_size": 65536 00:21:06.836 }, 00:21:06.836 { 00:21:06.836 "name": "BaseBdev3", 00:21:06.836 "uuid": "aeaed14b-daea-4dfa-a16d-ac5f574e5ded", 00:21:06.836 "is_configured": true, 00:21:06.836 "data_offset": 0, 00:21:06.836 "data_size": 65536 00:21:06.836 }, 00:21:06.836 { 00:21:06.836 "name": "BaseBdev4", 00:21:06.836 "uuid": "e10d689b-f232-4cd0-8b1f-276b2bfd1f17", 00:21:06.836 "is_configured": true, 00:21:06.836 "data_offset": 0, 00:21:06.836 "data_size": 65536 00:21:06.836 } 00:21:06.836 ] 00:21:06.836 }' 00:21:06.836 21:17:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:06.836 21:17:29 -- common/autotest_common.sh@10 -- # set +x 00:21:07.402 21:17:30 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:07.660 [2024-06-07 21:17:30.265767] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:07.660 [2024-06-07 21:17:30.266025] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:07.918 00:21:07.918 Latency(us) 00:21:07.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.918 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:07.918 raid_bdev1 : 11.32 105.11 315.33 0.00 0.00 13189.38 283.00 122016.12 00:21:07.918 =================================================================================================================== 00:21:07.918 Total : 105.11 315.33 0.00 0.00 13189.38 283.00 122016.12 00:21:07.918 [2024-06-07 21:17:30.369470] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.918 [2024-06-07 21:17:30.369660] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:07.918 0 00:21:07.918 [2024-06-07 21:17:30.369805] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:07.918 [2024-06-07 21:17:30.369823] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:21:07.918 21:17:30 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.918 21:17:30 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:08.176 21:17:30 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:08.176 21:17:30 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:08.176 21:17:30 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:08.176 21:17:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:08.176 21:17:30 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:08.176 21:17:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:08.176 21:17:30 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:08.176 21:17:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:08.176 21:17:30 -- bdev/nbd_common.sh@12 -- # local i 00:21:08.176 21:17:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:08.176 21:17:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:08.176 21:17:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:08.434 /dev/nbd0 00:21:08.434 21:17:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:08.434 21:17:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:08.434 21:17:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 
00:21:08.434 21:17:30 -- common/autotest_common.sh@857 -- # local i 00:21:08.434 21:17:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:08.434 21:17:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:08.434 21:17:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:08.434 21:17:30 -- common/autotest_common.sh@861 -- # break 00:21:08.434 21:17:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:08.434 21:17:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:08.434 21:17:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:08.434 1+0 records in 00:21:08.434 1+0 records out 00:21:08.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436438 s, 9.4 MB/s 00:21:08.434 21:17:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:08.434 21:17:30 -- common/autotest_common.sh@874 -- # size=4096 00:21:08.434 21:17:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:08.434 21:17:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:08.434 21:17:30 -- common/autotest_common.sh@877 -- # return 0 00:21:08.434 21:17:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:08.434 21:17:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:08.434 21:17:30 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:08.434 21:17:30 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:21:08.434 21:17:30 -- bdev/bdev_raid.sh@678 -- # continue 00:21:08.434 21:17:30 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:08.434 21:17:30 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:21:08.434 21:17:30 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:21:08.434 21:17:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:08.434 21:17:30 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:08.434 21:17:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:08.434 21:17:30 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:08.434 21:17:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:08.434 21:17:30 -- bdev/nbd_common.sh@12 -- # local i 00:21:08.434 21:17:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:08.434 21:17:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:08.434 21:17:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:08.434 /dev/nbd1 00:21:08.434 21:17:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:08.434 21:17:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:08.434 21:17:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:08.434 21:17:31 -- common/autotest_common.sh@857 -- # local i 00:21:08.434 21:17:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:08.434 21:17:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:08.434 21:17:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:08.434 21:17:31 -- common/autotest_common.sh@861 -- # break 00:21:08.434 21:17:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:08.434 21:17:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:08.434 21:17:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:08.692 1+0 records in 00:21:08.692 1+0 records out 00:21:08.692 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544405 s, 7.5 MB/s 00:21:08.692 21:17:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:08.692 21:17:31 -- common/autotest_common.sh@874 -- # size=4096 00:21:08.692 21:17:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:08.692 21:17:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:08.692 21:17:31 -- common/autotest_common.sh@877 -- # return 0 00:21:08.692 21:17:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:08.692 21:17:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:08.692 21:17:31 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:08.692 21:17:31 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:08.692 21:17:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:08.692 21:17:31 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:08.692 21:17:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:08.692 21:17:31 -- bdev/nbd_common.sh@51 -- # local i 00:21:08.692 21:17:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:08.692 21:17:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@41 -- # break 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@45 -- # return 0 00:21:08.951 21:17:31 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:08.951 21:17:31 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:21:08.951 21:17:31 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@12 -- # local i 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:08.951 21:17:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:09.209 /dev/nbd1 00:21:09.209 21:17:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:09.209 21:17:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:09.209 21:17:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:09.209 21:17:31 -- common/autotest_common.sh@857 -- # local i 00:21:09.209 21:17:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:09.209 21:17:31 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:09.209 21:17:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:09.209 21:17:31 -- common/autotest_common.sh@861 -- # break 00:21:09.209 21:17:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:09.209 21:17:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:09.209 21:17:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:09.209 1+0 records in 00:21:09.209 1+0 records out 00:21:09.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561422 s, 7.3 MB/s 00:21:09.209 21:17:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:09.209 21:17:31 -- common/autotest_common.sh@874 -- # size=4096 00:21:09.209 21:17:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:09.209 21:17:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:09.209 21:17:31 -- common/autotest_common.sh@877 -- # return 0 00:21:09.209 21:17:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:09.209 21:17:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:09.209 21:17:31 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:09.467 21:17:31 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:09.467 21:17:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:09.467 21:17:31 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:09.467 21:17:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:09.467 21:17:31 -- bdev/nbd_common.sh@51 -- # local i 00:21:09.467 21:17:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:09.467 21:17:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:09.726 21:17:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:09.726 21:17:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:09.726 21:17:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:09.726 21:17:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:09.726 21:17:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:09.726 21:17:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:09.726 21:17:32 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:09.726 21:17:32 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:09.726 21:17:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:09.726 21:17:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:09.726 21:17:32 -- bdev/nbd_common.sh@41 -- # break 00:21:09.726 21:17:32 -- bdev/nbd_common.sh@45 -- # return 0 00:21:09.726 21:17:32 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:09.726 21:17:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:09.726 21:17:32 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:09.726 21:17:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:09.726 21:17:32 -- bdev/nbd_common.sh@51 -- # local i 00:21:09.726 21:17:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:09.726 21:17:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:09.984 21:17:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:09.984 21:17:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:09.984 21:17:32 -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd0 00:21:09.984 21:17:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:09.984 21:17:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:09.984 21:17:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:09.984 21:17:32 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:09.984 21:17:32 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:09.984 21:17:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:09.984 21:17:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:09.984 21:17:32 -- bdev/nbd_common.sh@41 -- # break 00:21:09.984 21:17:32 -- bdev/nbd_common.sh@45 -- # return 0 00:21:09.984 21:17:32 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:09.984 21:17:32 -- bdev/bdev_raid.sh@709 -- # killprocess 139525 00:21:09.984 21:17:32 -- common/autotest_common.sh@926 -- # '[' -z 139525 ']' 00:21:09.984 21:17:32 -- common/autotest_common.sh@930 -- # kill -0 139525 00:21:09.984 21:17:32 -- common/autotest_common.sh@931 -- # uname 00:21:09.984 21:17:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:09.984 21:17:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 139525 00:21:10.241 21:17:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:10.242 21:17:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:10.242 21:17:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 139525' 00:21:10.242 killing process with pid 139525 00:21:10.242 21:17:32 -- common/autotest_common.sh@945 -- # kill 139525 00:21:10.242 Received shutdown signal, test time was about 13.629527 seconds 00:21:10.242 00:21:10.242 Latency(us) 00:21:10.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.242 =================================================================================================================== 00:21:10.242 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:10.242 21:17:32 -- common/autotest_common.sh@950 -- # wait 139525 00:21:10.242 [2024-06-07 21:17:32.673984] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:10.242 [2024-06-07 21:17:32.732402] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:10.500 00:21:10.500 real 0m18.346s 00:21:10.500 user 0m29.044s 00:21:10.500 sys 0m2.351s 00:21:10.500 21:17:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:10.500 21:17:33 -- common/autotest_common.sh@10 -- # set +x 00:21:10.500 ************************************ 00:21:10.500 END TEST raid_rebuild_test_io 00:21:10.500 ************************************ 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:21:10.500 21:17:33 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:10.500 21:17:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:10.500 21:17:33 -- common/autotest_common.sh@10 -- # set +x 00:21:10.500 ************************************ 00:21:10.500 START TEST raid_rebuild_test_sb_io 00:21:10.500 ************************************ 00:21:10.500 21:17:33 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true true 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:10.500 
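Before the second test's setup above, the nbd teardown traced through autotest_common.sh (@856-@877) and nbd_common.sh (@35-@45) exercises two small wait helpers worth spelling out. The sketch below is reconstructed from those traced lines: the probe commands and the 20-iteration bound are as traced, but each helper's two retry loops are collapsed into one, and the sleep inside waitfornbd is assumed (only the exit variant shows an explicit sleep 0.1), so this is an approximation. The killprocess helper traced alongside them (@926-@950) is the usual kill -0 liveness check plus a ps comm= guard against killing a sudo wrapper.

    # Wait for /dev/$1 to appear in /proc/partitions, then prove it is
    # readable with one O_DIRECT 4 KiB read (as the dd in the trace does).
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; the trace only shows the loop bounds
        done
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct &&
            [ "$(stat -c %s /tmp/nbdtest)" != 0 ]
    }

    # Mirror image for nbd_stop_disk: wait until the kernel drops the device.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        ! grep -q -w "$nbd_name" /proc/partitions
    }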
21:17:33 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@544 -- # raid_pid=140075 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@545 -- # waitforlisten 140075 /var/tmp/spdk-raid.sock 00:21:10.500 21:17:33 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:10.500 21:17:33 -- common/autotest_common.sh@819 -- # '[' -z 140075 ']' 00:21:10.500 21:17:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:10.500 21:17:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:10.500 21:17:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:10.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:10.500 21:17:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:10.500 21:17:33 -- common/autotest_common.sh@10 -- # set +x 00:21:10.759 [2024-06-07 21:17:33.190747] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:10.759 [2024-06-07 21:17:33.191150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140075 ] 00:21:10.759 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:10.759 Zero copy mechanism will not be used. 
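The @521 burst above is just bash xtrace of a one-line array build; run standalone with this test's num_base_bdevs=4 it expands as follows (the construction is verbatim from the trace, only the printf is added for illustration):

    num_base_bdevs=4
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
    printf '%s\n' "${base_bdevs[@]}"
    # BaseBdev1
    # BaseBdev2
    # BaseBdev3
    # BaseBdev4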
00:21:10.759 [2024-06-07 21:17:33.353685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.759 [2024-06-07 21:17:33.426194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.016 [2024-06-07 21:17:33.499549] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:11.582 21:17:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:11.582 21:17:34 -- common/autotest_common.sh@852 -- # return 0 00:21:11.582 21:17:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:11.582 21:17:34 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:11.582 21:17:34 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:11.839 BaseBdev1_malloc 00:21:11.839 21:17:34 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:12.097 [2024-06-07 21:17:34.548545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:12.097 [2024-06-07 21:17:34.548941] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:12.097 [2024-06-07 21:17:34.549111] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:21:12.097 [2024-06-07 21:17:34.549290] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:12.097 [2024-06-07 21:17:34.552459] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:12.097 [2024-06-07 21:17:34.552646] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:12.097 BaseBdev1 00:21:12.097 21:17:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:12.097 21:17:34 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:12.097 21:17:34 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:12.097 BaseBdev2_malloc 00:21:12.355 21:17:34 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:12.355 [2024-06-07 21:17:34.960774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:12.355 [2024-06-07 21:17:34.961278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:12.355 [2024-06-07 21:17:34.961484] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:21:12.355 [2024-06-07 21:17:34.961668] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:12.355 [2024-06-07 21:17:34.964574] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:12.355 [2024-06-07 21:17:34.964766] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:12.355 BaseBdev2 00:21:12.355 21:17:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:12.355 21:17:34 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:12.355 21:17:34 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:12.612 BaseBdev3_malloc 00:21:12.612 21:17:35 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:21:12.871 [2024-06-07 21:17:35.384036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:12.871 [2024-06-07 21:17:35.384347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:12.871 [2024-06-07 21:17:35.384433] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:12.871 [2024-06-07 21:17:35.384589] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:12.871 [2024-06-07 21:17:35.387243] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:12.871 [2024-06-07 21:17:35.387449] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:12.871 BaseBdev3 00:21:12.871 21:17:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:12.871 21:17:35 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:12.871 21:17:35 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:13.129 BaseBdev4_malloc 00:21:13.129 21:17:35 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:13.129 [2024-06-07 21:17:35.799210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:13.129 [2024-06-07 21:17:35.799556] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:13.129 [2024-06-07 21:17:35.799751] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:13.129 [2024-06-07 21:17:35.799916] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.129 [2024-06-07 21:17:35.802586] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.129 [2024-06-07 21:17:35.802766] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:13.129 BaseBdev4 00:21:13.388 21:17:35 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:13.388 spare_malloc 00:21:13.388 21:17:36 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:13.647 spare_delay 00:21:13.647 21:17:36 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:13.905 [2024-06-07 21:17:36.413454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:13.905 [2024-06-07 21:17:36.413883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:13.905 [2024-06-07 21:17:36.414047] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:13.905 [2024-06-07 21:17:36.414206] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.905 [2024-06-07 21:17:36.417077] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.905 [2024-06-07 21:17:36.417283] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:13.905 spare 00:21:13.905 21:17:36 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:14.164 [2024-06-07 21:17:36.613744] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:14.164 [2024-06-07 21:17:36.616207] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:14.164 [2024-06-07 21:17:36.616444] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:14.164 [2024-06-07 21:17:36.616546] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:14.164 [2024-06-07 21:17:36.616919] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:21:14.164 [2024-06-07 21:17:36.616970] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:14.164 [2024-06-07 21:17:36.617245] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:14.164 [2024-06-07 21:17:36.617855] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:21:14.164 [2024-06-07 21:17:36.617993] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:21:14.164 [2024-06-07 21:17:36.618329] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.164 21:17:36 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:14.164 21:17:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:14.164 21:17:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:14.164 21:17:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:14.164 21:17:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:14.164 21:17:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:14.164 21:17:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:14.164 21:17:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:14.164 21:17:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:14.164 21:17:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:14.164 21:17:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.164 21:17:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.423 21:17:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:14.423 "name": "raid_bdev1", 00:21:14.423 "uuid": "5865147e-c961-46e3-9642-6a9505baad84", 00:21:14.423 "strip_size_kb": 0, 00:21:14.423 "state": "online", 00:21:14.423 "raid_level": "raid1", 00:21:14.423 "superblock": true, 00:21:14.423 "num_base_bdevs": 4, 00:21:14.423 "num_base_bdevs_discovered": 4, 00:21:14.423 "num_base_bdevs_operational": 4, 00:21:14.423 "base_bdevs_list": [ 00:21:14.423 { 00:21:14.423 "name": "BaseBdev1", 00:21:14.423 "uuid": "5e222ae8-e031-5f8c-bc8d-26cd12bfaf4a", 00:21:14.423 "is_configured": true, 00:21:14.423 "data_offset": 2048, 00:21:14.423 "data_size": 63488 00:21:14.423 }, 00:21:14.423 { 00:21:14.423 "name": "BaseBdev2", 00:21:14.423 "uuid": "293be7ed-578c-5565-afe6-b5b7a8b2de0c", 00:21:14.423 "is_configured": true, 00:21:14.423 "data_offset": 2048, 00:21:14.423 "data_size": 63488 00:21:14.423 }, 00:21:14.423 { 00:21:14.423 "name": "BaseBdev3", 00:21:14.423 "uuid": "a8b52fe6-4eeb-541e-ab84-78962fded2b0", 00:21:14.423 "is_configured": true, 00:21:14.423 "data_offset": 2048, 00:21:14.423 "data_size": 63488 00:21:14.423 }, 00:21:14.423 
{ 00:21:14.423 "name": "BaseBdev4", 00:21:14.423 "uuid": "59910f4e-5a80-589a-8a55-a5b369d42d01", 00:21:14.423 "is_configured": true, 00:21:14.423 "data_offset": 2048, 00:21:14.423 "data_size": 63488 00:21:14.423 } 00:21:14.423 ] 00:21:14.423 }' 00:21:14.423 21:17:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:14.423 21:17:36 -- common/autotest_common.sh@10 -- # set +x 00:21:14.990 21:17:37 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:14.990 21:17:37 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:15.249 [2024-06-07 21:17:37.698897] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:15.249 21:17:37 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:15.249 21:17:37 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.249 21:17:37 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:15.507 21:17:37 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:15.507 21:17:37 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:15.507 21:17:37 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:15.507 21:17:37 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:15.507 [2024-06-07 21:17:38.022001] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:21:15.507 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:15.507 Zero copy mechanism will not be used. 00:21:15.507 Running I/O for 60 seconds... 
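One detail worth flagging before the I/O starts: unlike the first (non-superblock) run, every base bdev here reports data_offset 2048 and data_size 63488 instead of 0 and 65536, which lines up with the -s superblock flag reserving 2048 blocks of each 65536-block malloc bdev (32 MiB at 512 B) for on-disk metadata. The harness reads the offset back exactly as traced at @570:

    # Verbatim from the @570 trace: pull the first base bdev's data_offset
    # out of the RPC dump (prints 2048 here, 0 in the non-superblock test).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all |
        jq -r '.[].base_bdevs_list[0].data_offset'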
00:21:15.507 [2024-06-07 21:17:38.115487] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:15.507 [2024-06-07 21:17:38.129359] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005c70 00:21:15.507 21:17:38 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:15.507 21:17:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:15.507 21:17:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:15.507 21:17:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:15.507 21:17:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:15.507 21:17:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:15.507 21:17:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:15.507 21:17:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:15.507 21:17:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:15.507 21:17:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:15.508 21:17:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.508 21:17:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.766 21:17:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:15.766 "name": "raid_bdev1", 00:21:15.766 "uuid": "5865147e-c961-46e3-9642-6a9505baad84", 00:21:15.766 "strip_size_kb": 0, 00:21:15.766 "state": "online", 00:21:15.766 "raid_level": "raid1", 00:21:15.766 "superblock": true, 00:21:15.766 "num_base_bdevs": 4, 00:21:15.766 "num_base_bdevs_discovered": 3, 00:21:15.766 "num_base_bdevs_operational": 3, 00:21:15.766 "base_bdevs_list": [ 00:21:15.766 { 00:21:15.766 "name": null, 00:21:15.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.766 "is_configured": false, 00:21:15.766 "data_offset": 2048, 00:21:15.766 "data_size": 63488 00:21:15.766 }, 00:21:15.766 { 00:21:15.766 "name": "BaseBdev2", 00:21:15.766 "uuid": "293be7ed-578c-5565-afe6-b5b7a8b2de0c", 00:21:15.766 "is_configured": true, 00:21:15.766 "data_offset": 2048, 00:21:15.766 "data_size": 63488 00:21:15.766 }, 00:21:15.766 { 00:21:15.766 "name": "BaseBdev3", 00:21:15.766 "uuid": "a8b52fe6-4eeb-541e-ab84-78962fded2b0", 00:21:15.766 "is_configured": true, 00:21:15.766 "data_offset": 2048, 00:21:15.766 "data_size": 63488 00:21:15.766 }, 00:21:15.766 { 00:21:15.766 "name": "BaseBdev4", 00:21:15.766 "uuid": "59910f4e-5a80-589a-8a55-a5b369d42d01", 00:21:15.766 "is_configured": true, 00:21:15.767 "data_offset": 2048, 00:21:15.767 "data_size": 63488 00:21:15.767 } 00:21:15.767 ] 00:21:15.767 }' 00:21:15.767 21:17:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:15.767 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:21:16.701 21:17:39 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:16.701 [2024-06-07 21:17:39.294911] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:16.701 [2024-06-07 21:17:39.296834] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:16.701 21:17:39 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:16.701 [2024-06-07 21:17:39.329357] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:16.701 [2024-06-07 21:17:39.332242] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:16.960 
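The failure injection driving this rebuild is just two RPCs, traced at @591 and @597; condensed here with the commands and arguments exactly as traced and only the rpc_py shorthand added:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Knock out one leg while bdevperf keeps the 50/50 randrw load running,
    # then attach the spare; the *NOTICE* above confirms the rebuild starts.
    $rpc_py bdev_raid_remove_base_bdev BaseBdev1
    $rpc_py bdev_raid_add_base_bdev raid_bdev1 spare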
[2024-06-07 21:17:39.453798] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:16.960 [2024-06-07 21:17:39.454935] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:16.960 [2024-06-07 21:17:39.585585] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:16.960 [2024-06-07 21:17:39.586730] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:17.527 [2024-06-07 21:17:39.960149] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:17.527 [2024-06-07 21:17:39.962146] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:17.527 [2024-06-07 21:17:40.164942] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:17.527 [2024-06-07 21:17:40.165562] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:17.786 21:17:40 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.786 21:17:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:17.786 21:17:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:17.786 21:17:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:17.786 21:17:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:17.786 21:17:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.786 21:17:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.786 [2024-06-07 21:17:40.435602] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:18.045 21:17:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:18.045 "name": "raid_bdev1", 00:21:18.045 "uuid": "5865147e-c961-46e3-9642-6a9505baad84", 00:21:18.045 "strip_size_kb": 0, 00:21:18.045 "state": "online", 00:21:18.045 "raid_level": "raid1", 00:21:18.045 "superblock": true, 00:21:18.045 "num_base_bdevs": 4, 00:21:18.045 "num_base_bdevs_discovered": 4, 00:21:18.045 "num_base_bdevs_operational": 4, 00:21:18.045 "process": { 00:21:18.045 "type": "rebuild", 00:21:18.045 "target": "spare", 00:21:18.045 "progress": { 00:21:18.045 "blocks": 14336, 00:21:18.045 "percent": 22 00:21:18.045 } 00:21:18.045 }, 00:21:18.045 "base_bdevs_list": [ 00:21:18.045 { 00:21:18.045 "name": "spare", 00:21:18.045 "uuid": "32d927ba-b5e4-562e-a227-ee00dd828c9a", 00:21:18.045 "is_configured": true, 00:21:18.045 "data_offset": 2048, 00:21:18.045 "data_size": 63488 00:21:18.045 }, 00:21:18.045 { 00:21:18.045 "name": "BaseBdev2", 00:21:18.045 "uuid": "293be7ed-578c-5565-afe6-b5b7a8b2de0c", 00:21:18.045 "is_configured": true, 00:21:18.045 "data_offset": 2048, 00:21:18.045 "data_size": 63488 00:21:18.045 }, 00:21:18.045 { 00:21:18.045 "name": "BaseBdev3", 00:21:18.045 "uuid": "a8b52fe6-4eeb-541e-ab84-78962fded2b0", 00:21:18.045 "is_configured": true, 00:21:18.045 "data_offset": 2048, 00:21:18.045 "data_size": 63488 00:21:18.045 }, 00:21:18.045 { 00:21:18.045 "name": "BaseBdev4", 00:21:18.045 "uuid": "59910f4e-5a80-589a-8a55-a5b369d42d01", 00:21:18.045 
"is_configured": true, 00:21:18.045 "data_offset": 2048, 00:21:18.045 "data_size": 63488 00:21:18.045 } 00:21:18.045 ] 00:21:18.045 }' 00:21:18.045 21:17:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:18.045 21:17:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:18.045 21:17:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:18.045 21:17:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:18.045 21:17:40 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:18.304 [2024-06-07 21:17:40.838954] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:18.304 [2024-06-07 21:17:40.926670] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:18.563 [2024-06-07 21:17:41.088380] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:18.563 [2024-06-07 21:17:41.094775] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.563 [2024-06-07 21:17:41.131493] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005c70 00:21:18.563 21:17:41 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:18.563 21:17:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:18.563 21:17:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:18.563 21:17:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:18.563 21:17:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:18.563 21:17:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:18.563 21:17:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:18.563 21:17:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:18.563 21:17:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:18.563 21:17:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:18.563 21:17:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.563 21:17:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.821 21:17:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:18.821 "name": "raid_bdev1", 00:21:18.821 "uuid": "5865147e-c961-46e3-9642-6a9505baad84", 00:21:18.821 "strip_size_kb": 0, 00:21:18.821 "state": "online", 00:21:18.821 "raid_level": "raid1", 00:21:18.821 "superblock": true, 00:21:18.821 "num_base_bdevs": 4, 00:21:18.821 "num_base_bdevs_discovered": 3, 00:21:18.821 "num_base_bdevs_operational": 3, 00:21:18.821 "base_bdevs_list": [ 00:21:18.821 { 00:21:18.821 "name": null, 00:21:18.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.821 "is_configured": false, 00:21:18.821 "data_offset": 2048, 00:21:18.821 "data_size": 63488 00:21:18.821 }, 00:21:18.821 { 00:21:18.821 "name": "BaseBdev2", 00:21:18.821 "uuid": "293be7ed-578c-5565-afe6-b5b7a8b2de0c", 00:21:18.821 "is_configured": true, 00:21:18.821 "data_offset": 2048, 00:21:18.821 "data_size": 63488 00:21:18.821 }, 00:21:18.821 { 00:21:18.821 "name": "BaseBdev3", 00:21:18.821 "uuid": "a8b52fe6-4eeb-541e-ab84-78962fded2b0", 00:21:18.821 "is_configured": true, 00:21:18.821 "data_offset": 2048, 00:21:18.821 "data_size": 63488 00:21:18.821 }, 00:21:18.821 { 00:21:18.821 "name": "BaseBdev4", 00:21:18.821 "uuid": 
"59910f4e-5a80-589a-8a55-a5b369d42d01", 00:21:18.821 "is_configured": true, 00:21:18.821 "data_offset": 2048, 00:21:18.821 "data_size": 63488 00:21:18.821 } 00:21:18.821 ] 00:21:18.821 }' 00:21:18.821 21:17:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:18.821 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:21:19.758 21:17:42 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:19.758 21:17:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:19.758 21:17:42 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:19.758 21:17:42 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:19.758 21:17:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:19.758 21:17:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.758 21:17:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.758 21:17:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:19.758 "name": "raid_bdev1", 00:21:19.758 "uuid": "5865147e-c961-46e3-9642-6a9505baad84", 00:21:19.758 "strip_size_kb": 0, 00:21:19.758 "state": "online", 00:21:19.758 "raid_level": "raid1", 00:21:19.758 "superblock": true, 00:21:19.758 "num_base_bdevs": 4, 00:21:19.758 "num_base_bdevs_discovered": 3, 00:21:19.758 "num_base_bdevs_operational": 3, 00:21:19.758 "base_bdevs_list": [ 00:21:19.758 { 00:21:19.758 "name": null, 00:21:19.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.758 "is_configured": false, 00:21:19.758 "data_offset": 2048, 00:21:19.758 "data_size": 63488 00:21:19.758 }, 00:21:19.758 { 00:21:19.758 "name": "BaseBdev2", 00:21:19.758 "uuid": "293be7ed-578c-5565-afe6-b5b7a8b2de0c", 00:21:19.758 "is_configured": true, 00:21:19.758 "data_offset": 2048, 00:21:19.758 "data_size": 63488 00:21:19.758 }, 00:21:19.758 { 00:21:19.758 "name": "BaseBdev3", 00:21:19.758 "uuid": "a8b52fe6-4eeb-541e-ab84-78962fded2b0", 00:21:19.758 "is_configured": true, 00:21:19.758 "data_offset": 2048, 00:21:19.758 "data_size": 63488 00:21:19.758 }, 00:21:19.758 { 00:21:19.758 "name": "BaseBdev4", 00:21:19.758 "uuid": "59910f4e-5a80-589a-8a55-a5b369d42d01", 00:21:19.758 "is_configured": true, 00:21:19.758 "data_offset": 2048, 00:21:19.758 "data_size": 63488 00:21:19.758 } 00:21:19.758 ] 00:21:19.758 }' 00:21:19.758 21:17:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:20.017 21:17:42 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:20.017 21:17:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:20.017 21:17:42 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:20.017 21:17:42 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:20.275 [2024-06-07 21:17:42.734992] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:20.275 [2024-06-07 21:17:42.735450] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:20.275 21:17:42 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:20.275 [2024-06-07 21:17:42.810114] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:20.275 [2024-06-07 21:17:42.813109] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:20.533 [2024-06-07 21:17:42.953704] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:21:20.533 [2024-06-07 21:17:42.955710] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:20.533 [2024-06-07 21:17:43.171672] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:20.533 [2024-06-07 21:17:43.172417] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:21.097 [2024-06-07 21:17:43.581662] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:21.097 [2024-06-07 21:17:43.582800] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:21.354 21:17:43 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:21.354 21:17:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:21.354 21:17:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:21.354 21:17:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:21.354 21:17:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:21.354 21:17:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.354 21:17:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.354 [2024-06-07 21:17:43.924046] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:21.354 [2024-06-07 21:17:43.925924] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:21.354 21:17:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:21.354 "name": "raid_bdev1", 00:21:21.354 "uuid": "5865147e-c961-46e3-9642-6a9505baad84", 00:21:21.354 "strip_size_kb": 0, 00:21:21.354 "state": "online", 00:21:21.354 "raid_level": "raid1", 00:21:21.354 "superblock": true, 00:21:21.354 "num_base_bdevs": 4, 00:21:21.354 "num_base_bdevs_discovered": 4, 00:21:21.354 "num_base_bdevs_operational": 4, 00:21:21.354 "process": { 00:21:21.354 "type": "rebuild", 00:21:21.354 "target": "spare", 00:21:21.354 "progress": { 00:21:21.354 "blocks": 14336, 00:21:21.354 "percent": 22 00:21:21.354 } 00:21:21.354 }, 00:21:21.354 "base_bdevs_list": [ 00:21:21.354 { 00:21:21.354 "name": "spare", 00:21:21.354 "uuid": "32d927ba-b5e4-562e-a227-ee00dd828c9a", 00:21:21.354 "is_configured": true, 00:21:21.355 "data_offset": 2048, 00:21:21.355 "data_size": 63488 00:21:21.355 }, 00:21:21.355 { 00:21:21.355 "name": "BaseBdev2", 00:21:21.355 "uuid": "293be7ed-578c-5565-afe6-b5b7a8b2de0c", 00:21:21.355 "is_configured": true, 00:21:21.355 "data_offset": 2048, 00:21:21.355 "data_size": 63488 00:21:21.355 }, 00:21:21.355 { 00:21:21.355 "name": "BaseBdev3", 00:21:21.355 "uuid": "a8b52fe6-4eeb-541e-ab84-78962fded2b0", 00:21:21.355 "is_configured": true, 00:21:21.355 "data_offset": 2048, 00:21:21.355 "data_size": 63488 00:21:21.355 }, 00:21:21.355 { 00:21:21.355 "name": "BaseBdev4", 00:21:21.355 "uuid": "59910f4e-5a80-589a-8a55-a5b369d42d01", 00:21:21.355 "is_configured": true, 00:21:21.355 "data_offset": 2048, 00:21:21.355 "data_size": 63488 00:21:21.355 } 00:21:21.355 ] 00:21:21.355 }' 00:21:21.355 21:17:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:21.612 21:17:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:21:21.612 21:17:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:21.612 21:17:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:21.612 21:17:44 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:21.612 21:17:44 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:21.612 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:21.612 21:17:44 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:21.613 21:17:44 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:21.613 21:17:44 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:21.613 21:17:44 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:21.613 [2024-06-07 21:17:44.159050] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:21.613 [2024-06-07 21:17:44.159705] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:21.871 [2024-06-07 21:17:44.351127] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:21.871 [2024-06-07 21:17:44.525242] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005c70 00:21:21.871 [2024-06-07 21:17:44.525606] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ee0 00:21:22.129 [2024-06-07 21:17:44.650119] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:22.129 21:17:44 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:22.129 21:17:44 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:22.129 21:17:44 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:22.129 21:17:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:22.129 21:17:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:22.129 21:17:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:22.129 21:17:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:22.129 21:17:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.129 21:17:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.386 21:17:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:22.386 "name": "raid_bdev1", 00:21:22.386 "uuid": "5865147e-c961-46e3-9642-6a9505baad84", 00:21:22.386 "strip_size_kb": 0, 00:21:22.387 "state": "online", 00:21:22.387 "raid_level": "raid1", 00:21:22.387 "superblock": true, 00:21:22.387 "num_base_bdevs": 4, 00:21:22.387 "num_base_bdevs_discovered": 3, 00:21:22.387 "num_base_bdevs_operational": 3, 00:21:22.387 "process": { 00:21:22.387 "type": "rebuild", 00:21:22.387 "target": "spare", 00:21:22.387 "progress": { 00:21:22.387 "blocks": 22528, 00:21:22.387 "percent": 35 00:21:22.387 } 00:21:22.387 }, 00:21:22.387 "base_bdevs_list": [ 00:21:22.387 { 00:21:22.387 "name": "spare", 00:21:22.387 "uuid": "32d927ba-b5e4-562e-a227-ee00dd828c9a", 00:21:22.387 "is_configured": true, 00:21:22.387 "data_offset": 2048, 00:21:22.387 "data_size": 63488 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "name": null, 00:21:22.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.387 "is_configured": false, 00:21:22.387 "data_offset": 
2048, 00:21:22.387 "data_size": 63488 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "name": "BaseBdev3", 00:21:22.387 "uuid": "a8b52fe6-4eeb-541e-ab84-78962fded2b0", 00:21:22.387 "is_configured": true, 00:21:22.387 "data_offset": 2048, 00:21:22.387 "data_size": 63488 00:21:22.387 }, 00:21:22.387 { 00:21:22.387 "name": "BaseBdev4", 00:21:22.387 "uuid": "59910f4e-5a80-589a-8a55-a5b369d42d01", 00:21:22.387 "is_configured": true, 00:21:22.387 "data_offset": 2048, 00:21:22.387 "data_size": 63488 00:21:22.387 } 00:21:22.387 ] 00:21:22.387 }' 00:21:22.387 21:17:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:22.387 21:17:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:22.387 21:17:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:22.387 [2024-06-07 21:17:45.019568] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:22.387 21:17:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:22.387 21:17:45 -- bdev/bdev_raid.sh@657 -- # local timeout=525 00:21:22.387 21:17:45 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:22.387 21:17:45 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:22.387 21:17:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:22.387 21:17:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:22.387 21:17:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:22.387 21:17:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:22.387 21:17:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.387 21:17:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.645 21:17:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:22.645 "name": "raid_bdev1", 00:21:22.645 "uuid": "5865147e-c961-46e3-9642-6a9505baad84", 00:21:22.645 "strip_size_kb": 0, 00:21:22.645 "state": "online", 00:21:22.645 "raid_level": "raid1", 00:21:22.645 "superblock": true, 00:21:22.645 "num_base_bdevs": 4, 00:21:22.645 "num_base_bdevs_discovered": 3, 00:21:22.645 "num_base_bdevs_operational": 3, 00:21:22.645 "process": { 00:21:22.645 "type": "rebuild", 00:21:22.645 "target": "spare", 00:21:22.645 "progress": { 00:21:22.645 "blocks": 28672, 00:21:22.645 "percent": 45 00:21:22.645 } 00:21:22.645 }, 00:21:22.645 "base_bdevs_list": [ 00:21:22.645 { 00:21:22.645 "name": "spare", 00:21:22.645 "uuid": "32d927ba-b5e4-562e-a227-ee00dd828c9a", 00:21:22.645 "is_configured": true, 00:21:22.645 "data_offset": 2048, 00:21:22.645 "data_size": 63488 00:21:22.645 }, 00:21:22.645 { 00:21:22.645 "name": null, 00:21:22.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.645 "is_configured": false, 00:21:22.645 "data_offset": 2048, 00:21:22.645 "data_size": 63488 00:21:22.645 }, 00:21:22.645 { 00:21:22.645 "name": "BaseBdev3", 00:21:22.645 "uuid": "a8b52fe6-4eeb-541e-ab84-78962fded2b0", 00:21:22.645 "is_configured": true, 00:21:22.645 "data_offset": 2048, 00:21:22.645 "data_size": 63488 00:21:22.645 }, 00:21:22.645 { 00:21:22.645 "name": "BaseBdev4", 00:21:22.645 "uuid": "59910f4e-5a80-589a-8a55-a5b369d42d01", 00:21:22.645 "is_configured": true, 00:21:22.645 "data_offset": 2048, 00:21:22.645 "data_size": 63488 00:21:22.645 } 00:21:22.645 ] 00:21:22.645 }' 00:21:22.645 21:17:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:22.645 21:17:45 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:22.645 21:17:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:22.903 [2024-06-07 21:17:45.360367] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:22.903 21:17:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:22.903 21:17:45 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:23.161 [2024-06-07 21:17:45.787716] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:23.418 [2024-06-07 21:17:45.919592] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:23.983 21:17:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:23.983 21:17:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:23.983 21:17:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:23.983 21:17:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:23.983 21:17:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:23.983 21:17:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:23.983 21:17:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.983 21:17:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.983 21:17:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:23.983 "name": "raid_bdev1", 00:21:23.983 "uuid": "5865147e-c961-46e3-9642-6a9505baad84", 00:21:23.983 "strip_size_kb": 0, 00:21:23.983 "state": "online", 00:21:23.983 "raid_level": "raid1", 00:21:23.983 "superblock": true, 00:21:23.983 "num_base_bdevs": 4, 00:21:23.983 "num_base_bdevs_discovered": 3, 00:21:23.983 "num_base_bdevs_operational": 3, 00:21:23.983 "process": { 00:21:23.983 "type": "rebuild", 00:21:23.983 "target": "spare", 00:21:23.983 "progress": { 00:21:23.983 "blocks": 49152, 00:21:23.983 "percent": 77 00:21:23.983 } 00:21:23.983 }, 00:21:23.983 "base_bdevs_list": [ 00:21:23.983 { 00:21:23.983 "name": "spare", 00:21:23.983 "uuid": "32d927ba-b5e4-562e-a227-ee00dd828c9a", 00:21:23.983 "is_configured": true, 00:21:23.983 "data_offset": 2048, 00:21:23.983 "data_size": 63488 00:21:23.983 }, 00:21:23.983 { 00:21:23.983 "name": null, 00:21:23.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.983 "is_configured": false, 00:21:23.983 "data_offset": 2048, 00:21:23.983 "data_size": 63488 00:21:23.983 }, 00:21:23.983 { 00:21:23.983 "name": "BaseBdev3", 00:21:23.983 "uuid": "a8b52fe6-4eeb-541e-ab84-78962fded2b0", 00:21:23.983 "is_configured": true, 00:21:23.983 "data_offset": 2048, 00:21:23.983 "data_size": 63488 00:21:23.983 }, 00:21:23.983 { 00:21:23.983 "name": "BaseBdev4", 00:21:23.983 "uuid": "59910f4e-5a80-589a-8a55-a5b369d42d01", 00:21:23.983 "is_configured": true, 00:21:23.983 "data_offset": 2048, 00:21:23.983 "data_size": 63488 00:21:23.983 } 00:21:23.983 ] 00:21:23.983 }' 00:21:23.983 21:17:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:23.983 [2024-06-07 21:17:46.634353] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:21:24.241 21:17:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:24.241 21:17:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:24.241 21:17:46 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:24.241 21:17:46 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:24.241 [2024-06-07 21:17:46.861941] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:24.241 [2024-06-07 21:17:46.862491] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:24.808 [2024-06-07 21:17:47.316406] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:25.066 [2024-06-07 21:17:47.638119] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:25.066 21:17:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:25.066 21:17:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:25.066 21:17:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:25.066 21:17:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:25.066 21:17:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:25.066 21:17:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:25.066 21:17:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.067 21:17:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.067 [2024-06-07 21:17:47.738051] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:25.067 [2024-06-07 21:17:47.741510] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.325 21:17:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:25.325 "name": "raid_bdev1", 00:21:25.325 "uuid": "5865147e-c961-46e3-9642-6a9505baad84", 00:21:25.325 "strip_size_kb": 0, 00:21:25.325 "state": "online", 00:21:25.325 "raid_level": "raid1", 00:21:25.325 "superblock": true, 00:21:25.325 "num_base_bdevs": 4, 00:21:25.325 "num_base_bdevs_discovered": 3, 00:21:25.325 "num_base_bdevs_operational": 3, 00:21:25.325 "base_bdevs_list": [ 00:21:25.325 { 00:21:25.325 "name": "spare", 00:21:25.325 "uuid": "32d927ba-b5e4-562e-a227-ee00dd828c9a", 00:21:25.325 "is_configured": true, 00:21:25.325 "data_offset": 2048, 00:21:25.325 "data_size": 63488 00:21:25.325 }, 00:21:25.325 { 00:21:25.325 "name": null, 00:21:25.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.325 "is_configured": false, 00:21:25.325 "data_offset": 2048, 00:21:25.325 "data_size": 63488 00:21:25.325 }, 00:21:25.325 { 00:21:25.325 "name": "BaseBdev3", 00:21:25.325 "uuid": "a8b52fe6-4eeb-541e-ab84-78962fded2b0", 00:21:25.325 "is_configured": true, 00:21:25.325 "data_offset": 2048, 00:21:25.325 "data_size": 63488 00:21:25.325 }, 00:21:25.325 { 00:21:25.325 "name": "BaseBdev4", 00:21:25.325 "uuid": "59910f4e-5a80-589a-8a55-a5b369d42d01", 00:21:25.325 "is_configured": true, 00:21:25.325 "data_offset": 2048, 00:21:25.325 "data_size": 63488 00:21:25.325 } 00:21:25.325 ] 00:21:25.325 }' 00:21:25.325 21:17:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:25.582 21:17:48 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:25.582 21:17:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:25.582 21:17:48 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:25.582 21:17:48 -- bdev/bdev_raid.sh@660 -- # break 00:21:25.582 21:17:48 -- bdev/bdev_raid.sh@666 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:21:25.582 21:17:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:25.582 21:17:48 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:25.582 21:17:48 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:25.582 21:17:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:25.582 21:17:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.582 21:17:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.840 21:17:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:25.840 "name": "raid_bdev1", 00:21:25.840 "uuid": "5865147e-c961-46e3-9642-6a9505baad84", 00:21:25.840 "strip_size_kb": 0, 00:21:25.840 "state": "online", 00:21:25.840 "raid_level": "raid1", 00:21:25.840 "superblock": true, 00:21:25.840 "num_base_bdevs": 4, 00:21:25.840 "num_base_bdevs_discovered": 3, 00:21:25.840 "num_base_bdevs_operational": 3, 00:21:25.840 "base_bdevs_list": [ 00:21:25.840 { 00:21:25.840 "name": "spare", 00:21:25.840 "uuid": "32d927ba-b5e4-562e-a227-ee00dd828c9a", 00:21:25.840 "is_configured": true, 00:21:25.840 "data_offset": 2048, 00:21:25.840 "data_size": 63488 00:21:25.840 }, 00:21:25.840 { 00:21:25.840 "name": null, 00:21:25.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.840 "is_configured": false, 00:21:25.840 "data_offset": 2048, 00:21:25.840 "data_size": 63488 00:21:25.840 }, 00:21:25.840 { 00:21:25.841 "name": "BaseBdev3", 00:21:25.841 "uuid": "a8b52fe6-4eeb-541e-ab84-78962fded2b0", 00:21:25.841 "is_configured": true, 00:21:25.841 "data_offset": 2048, 00:21:25.841 "data_size": 63488 00:21:25.841 }, 00:21:25.841 { 00:21:25.841 "name": "BaseBdev4", 00:21:25.841 "uuid": "59910f4e-5a80-589a-8a55-a5b369d42d01", 00:21:25.841 "is_configured": true, 00:21:25.841 "data_offset": 2048, 00:21:25.841 "data_size": 63488 00:21:25.841 } 00:21:25.841 ] 00:21:25.841 }' 00:21:25.841 21:17:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:25.841 21:17:48 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:25.841 21:17:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:25.841 21:17:48 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:25.841 21:17:48 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:25.841 21:17:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:25.841 21:17:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:25.841 21:17:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:25.841 21:17:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:25.841 21:17:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:25.841 21:17:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:25.841 21:17:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:25.841 21:17:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:25.841 21:17:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:25.841 21:17:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.841 21:17:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.099 21:17:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:26.099 "name": "raid_bdev1", 00:21:26.099 "uuid": "5865147e-c961-46e3-9642-6a9505baad84", 00:21:26.099 "strip_size_kb": 0, 00:21:26.099 
"state": "online", 00:21:26.099 "raid_level": "raid1", 00:21:26.099 "superblock": true, 00:21:26.099 "num_base_bdevs": 4, 00:21:26.099 "num_base_bdevs_discovered": 3, 00:21:26.099 "num_base_bdevs_operational": 3, 00:21:26.099 "base_bdevs_list": [ 00:21:26.099 { 00:21:26.099 "name": "spare", 00:21:26.099 "uuid": "32d927ba-b5e4-562e-a227-ee00dd828c9a", 00:21:26.099 "is_configured": true, 00:21:26.099 "data_offset": 2048, 00:21:26.099 "data_size": 63488 00:21:26.099 }, 00:21:26.099 { 00:21:26.099 "name": null, 00:21:26.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.099 "is_configured": false, 00:21:26.099 "data_offset": 2048, 00:21:26.099 "data_size": 63488 00:21:26.099 }, 00:21:26.099 { 00:21:26.099 "name": "BaseBdev3", 00:21:26.099 "uuid": "a8b52fe6-4eeb-541e-ab84-78962fded2b0", 00:21:26.099 "is_configured": true, 00:21:26.099 "data_offset": 2048, 00:21:26.099 "data_size": 63488 00:21:26.099 }, 00:21:26.099 { 00:21:26.099 "name": "BaseBdev4", 00:21:26.099 "uuid": "59910f4e-5a80-589a-8a55-a5b369d42d01", 00:21:26.099 "is_configured": true, 00:21:26.099 "data_offset": 2048, 00:21:26.099 "data_size": 63488 00:21:26.099 } 00:21:26.099 ] 00:21:26.099 }' 00:21:26.099 21:17:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:26.099 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:21:27.034 21:17:49 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:27.034 [2024-06-07 21:17:49.563189] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:27.034 [2024-06-07 21:17:49.565258] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:27.034 00:21:27.034 Latency(us) 00:21:27.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.034 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:27.034 raid_bdev1 : 11.61 92.90 278.71 0.00 0.00 14720.89 296.03 117726.49 00:21:27.034 =================================================================================================================== 00:21:27.034 Total : 92.90 278.71 0.00 0.00 14720.89 296.03 117726.49 00:21:27.034 [2024-06-07 21:17:49.643881] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:27.034 [2024-06-07 21:17:49.644084] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:27.034 0 00:21:27.034 [2024-06-07 21:17:49.644278] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:27.034 [2024-06-07 21:17:49.644306] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:21:27.034 21:17:49 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.034 21:17:49 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:27.292 21:17:49 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:27.292 21:17:49 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:27.292 21:17:49 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:27.292 21:17:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:27.293 21:17:49 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:27.293 21:17:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:27.293 21:17:49 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:27.293 21:17:49 -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:21:27.293 21:17:49 -- bdev/nbd_common.sh@12 -- # local i 00:21:27.293 21:17:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:27.293 21:17:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:27.293 21:17:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:27.551 /dev/nbd0 00:21:27.810 21:17:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:27.810 21:17:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:27.810 21:17:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:27.810 21:17:50 -- common/autotest_common.sh@857 -- # local i 00:21:27.810 21:17:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:27.810 21:17:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:27.810 21:17:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:27.810 21:17:50 -- common/autotest_common.sh@861 -- # break 00:21:27.810 21:17:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:27.810 21:17:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:27.810 21:17:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:27.810 1+0 records in 00:21:27.810 1+0 records out 00:21:27.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344309 s, 11.9 MB/s 00:21:27.810 21:17:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:27.810 21:17:50 -- common/autotest_common.sh@874 -- # size=4096 00:21:27.810 21:17:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:27.810 21:17:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:27.810 21:17:50 -- common/autotest_common.sh@877 -- # return 0 00:21:27.810 21:17:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:27.810 21:17:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:27.810 21:17:50 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:27.810 21:17:50 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:21:27.810 21:17:50 -- bdev/bdev_raid.sh@678 -- # continue 00:21:27.810 21:17:50 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:27.810 21:17:50 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:21:27.810 21:17:50 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:21:27.810 21:17:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:27.810 21:17:50 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:27.810 21:17:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:27.810 21:17:50 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:27.810 21:17:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:27.810 21:17:50 -- bdev/nbd_common.sh@12 -- # local i 00:21:27.810 21:17:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:27.810 21:17:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:27.810 21:17:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:28.069 /dev/nbd1 00:21:28.069 21:17:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:28.069 21:17:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:28.069 21:17:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:28.069 21:17:50 -- common/autotest_common.sh@857 -- # local i 00:21:28.069 21:17:50 -- 
common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:28.069 21:17:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:28.069 21:17:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:28.069 21:17:50 -- common/autotest_common.sh@861 -- # break 00:21:28.069 21:17:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:28.069 21:17:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:28.069 21:17:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:28.069 1+0 records in 00:21:28.069 1+0 records out 00:21:28.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388287 s, 10.5 MB/s 00:21:28.069 21:17:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:28.069 21:17:50 -- common/autotest_common.sh@874 -- # size=4096 00:21:28.069 21:17:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:28.069 21:17:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:28.069 21:17:50 -- common/autotest_common.sh@877 -- # return 0 00:21:28.069 21:17:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:28.069 21:17:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:28.069 21:17:50 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:28.069 21:17:50 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:28.069 21:17:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:28.069 21:17:50 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:28.069 21:17:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:28.069 21:17:50 -- bdev/nbd_common.sh@51 -- # local i 00:21:28.069 21:17:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:28.069 21:17:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:28.327 21:17:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:28.327 21:17:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:28.327 21:17:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:28.327 21:17:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:28.328 21:17:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:28.328 21:17:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:28.328 21:17:50 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:28.328 21:17:50 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:28.328 21:17:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:28.328 21:17:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:28.328 21:17:50 -- bdev/nbd_common.sh@41 -- # break 00:21:28.328 21:17:50 -- bdev/nbd_common.sh@45 -- # return 0 00:21:28.328 21:17:50 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:28.328 21:17:50 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:21:28.328 21:17:50 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:21:28.328 21:17:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:28.328 21:17:50 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:28.328 21:17:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:28.328 21:17:50 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:28.328 21:17:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:28.328 21:17:50 -- bdev/nbd_common.sh@12 -- # local i 00:21:28.328 21:17:50 -- bdev/nbd_common.sh@14 -- # 
(( i = 0 )) 00:21:28.328 21:17:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:28.328 21:17:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:28.586 /dev/nbd1 00:21:28.586 21:17:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:28.586 21:17:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:28.586 21:17:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:28.586 21:17:51 -- common/autotest_common.sh@857 -- # local i 00:21:28.586 21:17:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:28.586 21:17:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:28.586 21:17:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:28.586 21:17:51 -- common/autotest_common.sh@861 -- # break 00:21:28.586 21:17:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:28.586 21:17:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:28.586 21:17:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:28.586 1+0 records in 00:21:28.586 1+0 records out 00:21:28.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248277 s, 16.5 MB/s 00:21:28.586 21:17:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:28.586 21:17:51 -- common/autotest_common.sh@874 -- # size=4096 00:21:28.586 21:17:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:28.586 21:17:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:28.586 21:17:51 -- common/autotest_common.sh@877 -- # return 0 00:21:28.586 21:17:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:28.586 21:17:51 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:28.586 21:17:51 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:28.843 21:17:51 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:28.843 21:17:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:28.843 21:17:51 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:28.843 21:17:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:28.843 21:17:51 -- bdev/nbd_common.sh@51 -- # local i 00:21:28.843 21:17:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:28.843 21:17:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:29.101 21:17:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:29.101 21:17:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:29.101 21:17:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:29.101 21:17:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:29.101 21:17:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:29.101 21:17:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:29.101 21:17:51 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:29.101 21:17:51 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:29.101 21:17:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:29.101 21:17:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:29.101 21:17:51 -- bdev/nbd_common.sh@41 -- # break 00:21:29.101 21:17:51 -- bdev/nbd_common.sh@45 -- # return 0 00:21:29.101 21:17:51 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:29.101 21:17:51 -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:21:29.101 21:17:51 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:29.101 21:17:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:29.101 21:17:51 -- bdev/nbd_common.sh@51 -- # local i 00:21:29.101 21:17:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:29.101 21:17:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:29.360 21:17:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:29.360 21:17:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:29.360 21:17:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:29.360 21:17:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:29.360 21:17:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:29.360 21:17:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:29.360 21:17:51 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:29.360 21:17:51 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:29.360 21:17:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:29.360 21:17:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:29.360 21:17:51 -- bdev/nbd_common.sh@41 -- # break 00:21:29.360 21:17:51 -- bdev/nbd_common.sh@45 -- # return 0 00:21:29.360 21:17:51 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:29.360 21:17:51 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:29.360 21:17:51 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:29.360 21:17:51 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:29.619 21:17:52 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:29.877 [2024-06-07 21:17:52.442741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:29.877 [2024-06-07 21:17:52.442911] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:29.877 [2024-06-07 21:17:52.442992] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:21:29.877 [2024-06-07 21:17:52.443016] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:29.877 [2024-06-07 21:17:52.445964] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:29.877 [2024-06-07 21:17:52.446033] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:29.877 [2024-06-07 21:17:52.446172] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:29.877 [2024-06-07 21:17:52.446255] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:29.877 BaseBdev1 00:21:29.877 21:17:52 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:29.877 21:17:52 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:21:29.877 21:17:52 -- bdev/bdev_raid.sh@696 -- # continue 00:21:29.877 21:17:52 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:29.877 21:17:52 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:21:29.877 21:17:52 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:21:30.136 21:17:52 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p 
BaseBdev3 00:21:30.394 [2024-06-07 21:17:52.954925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:30.394 [2024-06-07 21:17:52.955031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:30.394 [2024-06-07 21:17:52.955080] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:21:30.394 [2024-06-07 21:17:52.955102] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:30.394 [2024-06-07 21:17:52.955764] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:30.394 [2024-06-07 21:17:52.955873] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:30.394 [2024-06-07 21:17:52.955981] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:21:30.394 [2024-06-07 21:17:52.956006] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:21:30.394 [2024-06-07 21:17:52.956020] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:30.394 [2024-06-07 21:17:52.956058] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state configuring 00:21:30.394 [2024-06-07 21:17:52.956146] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:30.394 BaseBdev3 00:21:30.394 21:17:52 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:30.394 21:17:52 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:21:30.394 21:17:52 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:21:30.652 21:17:53 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:30.910 [2024-06-07 21:17:53.374994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:30.910 [2024-06-07 21:17:53.375159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:30.910 [2024-06-07 21:17:53.375198] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:21:30.910 [2024-06-07 21:17:53.375224] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:30.910 [2024-06-07 21:17:53.375876] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:30.910 [2024-06-07 21:17:53.375957] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:30.910 [2024-06-07 21:17:53.376055] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:21:30.910 [2024-06-07 21:17:53.376087] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:30.910 BaseBdev4 00:21:30.910 21:17:53 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:31.168 21:17:53 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:31.437 [2024-06-07 21:17:53.859249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:31.437 [2024-06-07 21:17:53.859419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:21:31.437 [2024-06-07 21:17:53.859463] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:21:31.437 [2024-06-07 21:17:53.859504] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:31.437 [2024-06-07 21:17:53.860173] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:31.437 [2024-06-07 21:17:53.860267] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:31.437 [2024-06-07 21:17:53.860381] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:31.437 [2024-06-07 21:17:53.860428] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:31.437 spare 00:21:31.437 21:17:53 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:31.437 21:17:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:31.437 21:17:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:31.437 21:17:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:31.437 21:17:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:31.437 21:17:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:31.437 21:17:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:31.437 21:17:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:31.437 21:17:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:31.437 21:17:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:31.437 21:17:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.437 21:17:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.437 [2024-06-07 21:17:53.960597] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c680 00:21:31.437 [2024-06-07 21:17:53.960633] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:31.437 [2024-06-07 21:17:53.960901] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000039110 00:21:31.437 [2024-06-07 21:17:53.961567] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c680 00:21:31.438 [2024-06-07 21:17:53.961589] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c680 00:21:31.438 [2024-06-07 21:17:53.961782] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:31.438 21:17:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:31.438 "name": "raid_bdev1", 00:21:31.438 "uuid": "5865147e-c961-46e3-9642-6a9505baad84", 00:21:31.438 "strip_size_kb": 0, 00:21:31.438 "state": "online", 00:21:31.438 "raid_level": "raid1", 00:21:31.438 "superblock": true, 00:21:31.438 "num_base_bdevs": 4, 00:21:31.438 "num_base_bdevs_discovered": 3, 00:21:31.438 "num_base_bdevs_operational": 3, 00:21:31.438 "base_bdevs_list": [ 00:21:31.438 { 00:21:31.438 "name": "spare", 00:21:31.438 "uuid": "32d927ba-b5e4-562e-a227-ee00dd828c9a", 00:21:31.438 "is_configured": true, 00:21:31.438 "data_offset": 2048, 00:21:31.438 "data_size": 63488 00:21:31.438 }, 00:21:31.438 { 00:21:31.438 "name": null, 00:21:31.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.438 "is_configured": false, 00:21:31.438 "data_offset": 2048, 00:21:31.438 "data_size": 63488 00:21:31.438 }, 00:21:31.438 { 00:21:31.438 "name": "BaseBdev3", 00:21:31.438 
"uuid": "a8b52fe6-4eeb-541e-ab84-78962fded2b0", 00:21:31.438 "is_configured": true, 00:21:31.438 "data_offset": 2048, 00:21:31.438 "data_size": 63488 00:21:31.438 }, 00:21:31.438 { 00:21:31.438 "name": "BaseBdev4", 00:21:31.438 "uuid": "59910f4e-5a80-589a-8a55-a5b369d42d01", 00:21:31.438 "is_configured": true, 00:21:31.438 "data_offset": 2048, 00:21:31.438 "data_size": 63488 00:21:31.438 } 00:21:31.438 ] 00:21:31.438 }' 00:21:31.438 21:17:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:31.438 21:17:54 -- common/autotest_common.sh@10 -- # set +x 00:21:32.391 21:17:54 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:32.391 21:17:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:32.391 21:17:54 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:32.391 21:17:54 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:32.391 21:17:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:32.391 21:17:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.391 21:17:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.391 21:17:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:32.391 "name": "raid_bdev1", 00:21:32.391 "uuid": "5865147e-c961-46e3-9642-6a9505baad84", 00:21:32.391 "strip_size_kb": 0, 00:21:32.391 "state": "online", 00:21:32.391 "raid_level": "raid1", 00:21:32.391 "superblock": true, 00:21:32.391 "num_base_bdevs": 4, 00:21:32.391 "num_base_bdevs_discovered": 3, 00:21:32.391 "num_base_bdevs_operational": 3, 00:21:32.391 "base_bdevs_list": [ 00:21:32.391 { 00:21:32.391 "name": "spare", 00:21:32.391 "uuid": "32d927ba-b5e4-562e-a227-ee00dd828c9a", 00:21:32.391 "is_configured": true, 00:21:32.391 "data_offset": 2048, 00:21:32.391 "data_size": 63488 00:21:32.391 }, 00:21:32.391 { 00:21:32.391 "name": null, 00:21:32.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.391 "is_configured": false, 00:21:32.391 "data_offset": 2048, 00:21:32.391 "data_size": 63488 00:21:32.391 }, 00:21:32.391 { 00:21:32.391 "name": "BaseBdev3", 00:21:32.391 "uuid": "a8b52fe6-4eeb-541e-ab84-78962fded2b0", 00:21:32.391 "is_configured": true, 00:21:32.391 "data_offset": 2048, 00:21:32.391 "data_size": 63488 00:21:32.391 }, 00:21:32.391 { 00:21:32.391 "name": "BaseBdev4", 00:21:32.391 "uuid": "59910f4e-5a80-589a-8a55-a5b369d42d01", 00:21:32.391 "is_configured": true, 00:21:32.391 "data_offset": 2048, 00:21:32.391 "data_size": 63488 00:21:32.391 } 00:21:32.391 ] 00:21:32.391 }' 00:21:32.391 21:17:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:32.391 21:17:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:32.391 21:17:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:32.650 21:17:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:32.650 21:17:55 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.650 21:17:55 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:32.908 21:17:55 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:32.908 21:17:55 -- bdev/bdev_raid.sh@709 -- # killprocess 140075 00:21:32.908 21:17:55 -- common/autotest_common.sh@926 -- # '[' -z 140075 ']' 00:21:32.908 21:17:55 -- common/autotest_common.sh@930 -- # kill -0 140075 00:21:32.908 21:17:55 -- common/autotest_common.sh@931 -- # uname 00:21:32.908 21:17:55 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:32.908 21:17:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140075 00:21:32.908 killing process with pid 140075 00:21:32.908 Received shutdown signal, test time was about 17.343126 seconds 00:21:32.908 00:21:32.908 Latency(us) 00:21:32.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.908 =================================================================================================================== 00:21:32.908 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:32.908 21:17:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:32.908 21:17:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:32.908 21:17:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140075' 00:21:32.908 21:17:55 -- common/autotest_common.sh@945 -- # kill 140075 00:21:32.908 21:17:55 -- common/autotest_common.sh@950 -- # wait 140075 00:21:32.908 [2024-06-07 21:17:55.367990] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:32.908 [2024-06-07 21:17:55.368172] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:32.908 [2024-06-07 21:17:55.368342] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:32.908 [2024-06-07 21:17:55.368376] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c680 name raid_bdev1, state offline 00:21:32.908 [2024-06-07 21:17:55.412136] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:33.167 ************************************ 00:21:33.167 END TEST raid_rebuild_test_sb_io 00:21:33.167 ************************************ 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:33.167 00:21:33.167 real 0m22.530s 00:21:33.167 user 0m37.008s 00:21:33.167 sys 0m2.725s 00:21:33.167 21:17:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:33.167 21:17:55 -- common/autotest_common.sh@10 -- # set +x 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:21:33.167 21:17:55 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:21:33.167 21:17:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:33.167 21:17:55 -- common/autotest_common.sh@10 -- # set +x 00:21:33.167 ************************************ 00:21:33.167 START TEST raid5f_state_function_test 00:21:33.167 ************************************ 00:21:33.167 21:17:55 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 false 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@206 -- # (( i 
<= num_base_bdevs )) 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@226 -- # raid_pid=140724 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 140724' 00:21:33.167 Process raid pid: 140724 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@228 -- # waitforlisten 140724 /var/tmp/spdk-raid.sock 00:21:33.167 21:17:55 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:33.167 21:17:55 -- common/autotest_common.sh@819 -- # '[' -z 140724 ']' 00:21:33.167 21:17:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:33.167 21:17:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:33.167 21:17:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:33.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:33.167 21:17:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:33.167 21:17:55 -- common/autotest_common.sh@10 -- # set +x 00:21:33.167 [2024-06-07 21:17:55.770682] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:21:33.167 [2024-06-07 21:17:55.770914] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.425 [2024-06-07 21:17:55.922969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.425 [2024-06-07 21:17:56.004160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.425 [2024-06-07 21:17:56.061805] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:34.359 21:17:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:34.359 21:17:56 -- common/autotest_common.sh@852 -- # return 0 00:21:34.359 21:17:56 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:34.359 [2024-06-07 21:17:56.972385] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:34.359 [2024-06-07 21:17:56.972482] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:34.359 [2024-06-07 21:17:56.972513] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:34.359 [2024-06-07 21:17:56.972532] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:34.359 [2024-06-07 21:17:56.972540] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:34.359 [2024-06-07 21:17:56.972580] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:34.359 21:17:56 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:34.359 21:17:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:34.359 21:17:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:34.359 21:17:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:34.359 21:17:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:34.359 21:17:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:34.359 21:17:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:34.359 21:17:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:34.359 21:17:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:34.359 21:17:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:34.359 21:17:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.359 21:17:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.619 21:17:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:34.619 "name": "Existed_Raid", 00:21:34.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.619 "strip_size_kb": 64, 00:21:34.619 "state": "configuring", 00:21:34.619 "raid_level": "raid5f", 00:21:34.619 "superblock": false, 00:21:34.619 "num_base_bdevs": 3, 00:21:34.619 "num_base_bdevs_discovered": 0, 00:21:34.619 "num_base_bdevs_operational": 3, 00:21:34.619 "base_bdevs_list": [ 00:21:34.619 { 00:21:34.619 "name": "BaseBdev1", 00:21:34.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.619 "is_configured": false, 00:21:34.619 "data_offset": 0, 00:21:34.619 "data_size": 0 00:21:34.619 }, 00:21:34.619 { 00:21:34.619 "name": "BaseBdev2", 00:21:34.619 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:34.619 "is_configured": false, 00:21:34.619 "data_offset": 0, 00:21:34.619 "data_size": 0 00:21:34.619 }, 00:21:34.619 { 00:21:34.619 "name": "BaseBdev3", 00:21:34.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.619 "is_configured": false, 00:21:34.619 "data_offset": 0, 00:21:34.619 "data_size": 0 00:21:34.619 } 00:21:34.619 ] 00:21:34.619 }' 00:21:34.619 21:17:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:34.619 21:17:57 -- common/autotest_common.sh@10 -- # set +x 00:21:35.552 21:17:57 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:35.552 [2024-06-07 21:17:58.096519] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:35.552 [2024-06-07 21:17:58.096576] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:21:35.552 21:17:58 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:35.810 [2024-06-07 21:17:58.352609] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:35.810 [2024-06-07 21:17:58.352700] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:35.810 [2024-06-07 21:17:58.352739] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:35.810 [2024-06-07 21:17:58.352793] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:35.810 [2024-06-07 21:17:58.352826] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:35.810 [2024-06-07 21:17:58.352873] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:35.810 21:17:58 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:36.067 [2024-06-07 21:17:58.577678] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:36.067 BaseBdev1 00:21:36.067 21:17:58 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:36.067 21:17:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:21:36.067 21:17:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:36.067 21:17:58 -- common/autotest_common.sh@889 -- # local i 00:21:36.067 21:17:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:36.067 21:17:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:36.067 21:17:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:36.326 21:17:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:36.326 [ 00:21:36.326 { 00:21:36.326 "name": "BaseBdev1", 00:21:36.326 "aliases": [ 00:21:36.326 "d2c1b2df-8931-4b4c-ba8b-14d27332946e" 00:21:36.326 ], 00:21:36.326 "product_name": "Malloc disk", 00:21:36.326 "block_size": 512, 00:21:36.326 "num_blocks": 65536, 00:21:36.326 "uuid": "d2c1b2df-8931-4b4c-ba8b-14d27332946e", 00:21:36.326 "assigned_rate_limits": { 00:21:36.326 "rw_ios_per_sec": 0, 00:21:36.326 "rw_mbytes_per_sec": 0, 00:21:36.326 "r_mbytes_per_sec": 0, 00:21:36.326 "w_mbytes_per_sec": 
0 00:21:36.326 }, 00:21:36.326 "claimed": true, 00:21:36.326 "claim_type": "exclusive_write", 00:21:36.326 "zoned": false, 00:21:36.326 "supported_io_types": { 00:21:36.326 "read": true, 00:21:36.326 "write": true, 00:21:36.326 "unmap": true, 00:21:36.326 "write_zeroes": true, 00:21:36.326 "flush": true, 00:21:36.326 "reset": true, 00:21:36.326 "compare": false, 00:21:36.326 "compare_and_write": false, 00:21:36.326 "abort": true, 00:21:36.326 "nvme_admin": false, 00:21:36.326 "nvme_io": false 00:21:36.326 }, 00:21:36.326 "memory_domains": [ 00:21:36.326 { 00:21:36.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.326 "dma_device_type": 2 00:21:36.326 } 00:21:36.326 ], 00:21:36.326 "driver_specific": {} 00:21:36.326 } 00:21:36.326 ] 00:21:36.586 21:17:59 -- common/autotest_common.sh@895 -- # return 0 00:21:36.586 21:17:59 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:36.586 21:17:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:36.586 21:17:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:36.586 21:17:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:36.586 21:17:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:36.586 21:17:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:36.586 21:17:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:36.586 21:17:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:36.586 21:17:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:36.586 21:17:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:36.586 21:17:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.586 21:17:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:36.586 21:17:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:36.586 "name": "Existed_Raid", 00:21:36.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.586 "strip_size_kb": 64, 00:21:36.586 "state": "configuring", 00:21:36.586 "raid_level": "raid5f", 00:21:36.586 "superblock": false, 00:21:36.586 "num_base_bdevs": 3, 00:21:36.586 "num_base_bdevs_discovered": 1, 00:21:36.586 "num_base_bdevs_operational": 3, 00:21:36.586 "base_bdevs_list": [ 00:21:36.586 { 00:21:36.586 "name": "BaseBdev1", 00:21:36.586 "uuid": "d2c1b2df-8931-4b4c-ba8b-14d27332946e", 00:21:36.586 "is_configured": true, 00:21:36.586 "data_offset": 0, 00:21:36.586 "data_size": 65536 00:21:36.586 }, 00:21:36.586 { 00:21:36.586 "name": "BaseBdev2", 00:21:36.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.586 "is_configured": false, 00:21:36.586 "data_offset": 0, 00:21:36.586 "data_size": 0 00:21:36.586 }, 00:21:36.586 { 00:21:36.586 "name": "BaseBdev3", 00:21:36.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.586 "is_configured": false, 00:21:36.586 "data_offset": 0, 00:21:36.586 "data_size": 0 00:21:36.586 } 00:21:36.586 ] 00:21:36.586 }' 00:21:36.586 21:17:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:36.586 21:17:59 -- common/autotest_common.sh@10 -- # set +x 00:21:37.521 21:17:59 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:37.521 [2024-06-07 21:18:00.182147] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:37.521 [2024-06-07 21:18:00.182238] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006980 name Existed_Raid, state configuring 00:21:37.781 21:18:00 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:21:37.781 21:18:00 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:37.781 [2024-06-07 21:18:00.402258] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:37.781 [2024-06-07 21:18:00.404478] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:37.781 [2024-06-07 21:18:00.404555] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:37.782 [2024-06-07 21:18:00.404584] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:37.782 [2024-06-07 21:18:00.404610] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:37.782 21:18:00 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:21:37.782 21:18:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:37.782 21:18:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:37.782 21:18:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:37.782 21:18:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:37.782 21:18:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:37.782 21:18:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:37.782 21:18:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:37.782 21:18:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:37.782 21:18:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:37.782 21:18:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:37.782 21:18:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:37.782 21:18:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.782 21:18:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.040 21:18:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:38.040 "name": "Existed_Raid", 00:21:38.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.040 "strip_size_kb": 64, 00:21:38.040 "state": "configuring", 00:21:38.040 "raid_level": "raid5f", 00:21:38.040 "superblock": false, 00:21:38.040 "num_base_bdevs": 3, 00:21:38.040 "num_base_bdevs_discovered": 1, 00:21:38.040 "num_base_bdevs_operational": 3, 00:21:38.040 "base_bdevs_list": [ 00:21:38.040 { 00:21:38.040 "name": "BaseBdev1", 00:21:38.040 "uuid": "d2c1b2df-8931-4b4c-ba8b-14d27332946e", 00:21:38.040 "is_configured": true, 00:21:38.040 "data_offset": 0, 00:21:38.040 "data_size": 65536 00:21:38.040 }, 00:21:38.040 { 00:21:38.040 "name": "BaseBdev2", 00:21:38.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.040 "is_configured": false, 00:21:38.040 "data_offset": 0, 00:21:38.040 "data_size": 0 00:21:38.040 }, 00:21:38.040 { 00:21:38.040 "name": "BaseBdev3", 00:21:38.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.040 "is_configured": false, 00:21:38.040 "data_offset": 0, 00:21:38.040 "data_size": 0 00:21:38.040 } 00:21:38.040 ] 00:21:38.040 }' 00:21:38.040 21:18:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:38.040 21:18:00 -- common/autotest_common.sh@10 -- # set +x 00:21:38.974 21:18:01 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:38.974 [2024-06-07 21:18:01.645254] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:38.974 BaseBdev2 00:21:39.232 21:18:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:21:39.233 21:18:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:21:39.233 21:18:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:39.233 21:18:01 -- common/autotest_common.sh@889 -- # local i 00:21:39.233 21:18:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:39.233 21:18:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:39.233 21:18:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:39.233 21:18:01 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:39.491 [ 00:21:39.491 { 00:21:39.491 "name": "BaseBdev2", 00:21:39.491 "aliases": [ 00:21:39.491 "e241b470-0fb9-4b66-b9d0-094ca13ef694" 00:21:39.491 ], 00:21:39.491 "product_name": "Malloc disk", 00:21:39.491 "block_size": 512, 00:21:39.491 "num_blocks": 65536, 00:21:39.491 "uuid": "e241b470-0fb9-4b66-b9d0-094ca13ef694", 00:21:39.491 "assigned_rate_limits": { 00:21:39.491 "rw_ios_per_sec": 0, 00:21:39.491 "rw_mbytes_per_sec": 0, 00:21:39.491 "r_mbytes_per_sec": 0, 00:21:39.491 "w_mbytes_per_sec": 0 00:21:39.491 }, 00:21:39.491 "claimed": true, 00:21:39.491 "claim_type": "exclusive_write", 00:21:39.491 "zoned": false, 00:21:39.491 "supported_io_types": { 00:21:39.491 "read": true, 00:21:39.491 "write": true, 00:21:39.491 "unmap": true, 00:21:39.491 "write_zeroes": true, 00:21:39.491 "flush": true, 00:21:39.491 "reset": true, 00:21:39.491 "compare": false, 00:21:39.491 "compare_and_write": false, 00:21:39.491 "abort": true, 00:21:39.491 "nvme_admin": false, 00:21:39.491 "nvme_io": false 00:21:39.491 }, 00:21:39.491 "memory_domains": [ 00:21:39.491 { 00:21:39.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.491 "dma_device_type": 2 00:21:39.491 } 00:21:39.491 ], 00:21:39.491 "driver_specific": {} 00:21:39.491 } 00:21:39.491 ] 00:21:39.491 21:18:02 -- common/autotest_common.sh@895 -- # return 0 00:21:39.491 21:18:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:39.491 21:18:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:39.491 21:18:02 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:39.491 21:18:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:39.491 21:18:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:39.491 21:18:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:39.491 21:18:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:39.491 21:18:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:39.491 21:18:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:39.491 21:18:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:39.491 21:18:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:39.491 21:18:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:39.491 21:18:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.491 21:18:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
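The entries above show the harness's add-and-verify loop for the raid5f state-function test: create one malloc base bdev, wait for it to be examined and claimed by the raid module, then re-read the raid bdev and assert on its counters. A minimal sketch of that round trip, using only RPCs, paths, and parameters that appear verbatim in this log (65536 blocks of 512 bytes, i.e. a 32 MB malloc bdev):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # create the next base bdev (32 MB total, 512-byte blocks)
  $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev2
  # block until examine callbacks have run and the raid module has claimed it
  $rpc -s $sock bdev_wait_for_examine
  $rpc -s $sock bdev_get_bdevs -b BaseBdev2 -t 2000
  # re-read the raid bdev and inspect the fields the test asserts on
  $rpc -s $sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state, .num_base_bdevs_discovered'

As the dumps in this log show, the raid bdev stays in state "configuring" until all three base bdevs are discovered, at which point it transitions to "online".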
00:21:39.749 21:18:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:39.749 "name": "Existed_Raid", 00:21:39.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.749 "strip_size_kb": 64, 00:21:39.749 "state": "configuring", 00:21:39.749 "raid_level": "raid5f", 00:21:39.749 "superblock": false, 00:21:39.749 "num_base_bdevs": 3, 00:21:39.749 "num_base_bdevs_discovered": 2, 00:21:39.749 "num_base_bdevs_operational": 3, 00:21:39.749 "base_bdevs_list": [ 00:21:39.749 { 00:21:39.749 "name": "BaseBdev1", 00:21:39.749 "uuid": "d2c1b2df-8931-4b4c-ba8b-14d27332946e", 00:21:39.749 "is_configured": true, 00:21:39.749 "data_offset": 0, 00:21:39.749 "data_size": 65536 00:21:39.749 }, 00:21:39.749 { 00:21:39.749 "name": "BaseBdev2", 00:21:39.749 "uuid": "e241b470-0fb9-4b66-b9d0-094ca13ef694", 00:21:39.749 "is_configured": true, 00:21:39.749 "data_offset": 0, 00:21:39.749 "data_size": 65536 00:21:39.749 }, 00:21:39.749 { 00:21:39.749 "name": "BaseBdev3", 00:21:39.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.749 "is_configured": false, 00:21:39.749 "data_offset": 0, 00:21:39.749 "data_size": 0 00:21:39.749 } 00:21:39.749 ] 00:21:39.749 }' 00:21:39.750 21:18:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:39.750 21:18:02 -- common/autotest_common.sh@10 -- # set +x 00:21:40.316 21:18:02 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:40.575 [2024-06-07 21:18:03.203222] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:40.575 [2024-06-07 21:18:03.203282] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:21:40.575 [2024-06-07 21:18:03.203291] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:40.575 [2024-06-07 21:18:03.203787] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:21:40.575 [2024-06-07 21:18:03.205151] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:21:40.575 [2024-06-07 21:18:03.205174] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:21:40.575 [2024-06-07 21:18:03.205573] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.575 BaseBdev3 00:21:40.575 21:18:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:21:40.575 21:18:03 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:21:40.575 21:18:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:40.575 21:18:03 -- common/autotest_common.sh@889 -- # local i 00:21:40.575 21:18:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:40.575 21:18:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:40.575 21:18:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:40.832 21:18:03 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:41.090 [ 00:21:41.090 { 00:21:41.090 "name": "BaseBdev3", 00:21:41.090 "aliases": [ 00:21:41.090 "99430da5-c2d6-4848-9238-9fc6d6fd82f6" 00:21:41.090 ], 00:21:41.090 "product_name": "Malloc disk", 00:21:41.090 "block_size": 512, 00:21:41.090 "num_blocks": 65536, 00:21:41.090 "uuid": "99430da5-c2d6-4848-9238-9fc6d6fd82f6", 00:21:41.090 "assigned_rate_limits": { 00:21:41.090 
"rw_ios_per_sec": 0, 00:21:41.090 "rw_mbytes_per_sec": 0, 00:21:41.090 "r_mbytes_per_sec": 0, 00:21:41.090 "w_mbytes_per_sec": 0 00:21:41.090 }, 00:21:41.090 "claimed": true, 00:21:41.090 "claim_type": "exclusive_write", 00:21:41.090 "zoned": false, 00:21:41.090 "supported_io_types": { 00:21:41.090 "read": true, 00:21:41.090 "write": true, 00:21:41.090 "unmap": true, 00:21:41.090 "write_zeroes": true, 00:21:41.090 "flush": true, 00:21:41.090 "reset": true, 00:21:41.090 "compare": false, 00:21:41.090 "compare_and_write": false, 00:21:41.090 "abort": true, 00:21:41.090 "nvme_admin": false, 00:21:41.090 "nvme_io": false 00:21:41.090 }, 00:21:41.090 "memory_domains": [ 00:21:41.090 { 00:21:41.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.090 "dma_device_type": 2 00:21:41.090 } 00:21:41.090 ], 00:21:41.090 "driver_specific": {} 00:21:41.090 } 00:21:41.090 ] 00:21:41.090 21:18:03 -- common/autotest_common.sh@895 -- # return 0 00:21:41.090 21:18:03 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:41.090 21:18:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:41.090 21:18:03 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:41.090 21:18:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:41.090 21:18:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:41.090 21:18:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:41.090 21:18:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:41.090 21:18:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:41.090 21:18:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:41.090 21:18:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:41.090 21:18:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:41.090 21:18:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:41.090 21:18:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.090 21:18:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.349 21:18:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:41.349 "name": "Existed_Raid", 00:21:41.349 "uuid": "7320d9f1-3d4b-48ce-847f-147e9680f18e", 00:21:41.349 "strip_size_kb": 64, 00:21:41.349 "state": "online", 00:21:41.349 "raid_level": "raid5f", 00:21:41.349 "superblock": false, 00:21:41.349 "num_base_bdevs": 3, 00:21:41.349 "num_base_bdevs_discovered": 3, 00:21:41.349 "num_base_bdevs_operational": 3, 00:21:41.349 "base_bdevs_list": [ 00:21:41.349 { 00:21:41.349 "name": "BaseBdev1", 00:21:41.349 "uuid": "d2c1b2df-8931-4b4c-ba8b-14d27332946e", 00:21:41.349 "is_configured": true, 00:21:41.349 "data_offset": 0, 00:21:41.349 "data_size": 65536 00:21:41.349 }, 00:21:41.349 { 00:21:41.349 "name": "BaseBdev2", 00:21:41.349 "uuid": "e241b470-0fb9-4b66-b9d0-094ca13ef694", 00:21:41.349 "is_configured": true, 00:21:41.349 "data_offset": 0, 00:21:41.349 "data_size": 65536 00:21:41.349 }, 00:21:41.349 { 00:21:41.349 "name": "BaseBdev3", 00:21:41.349 "uuid": "99430da5-c2d6-4848-9238-9fc6d6fd82f6", 00:21:41.349 "is_configured": true, 00:21:41.349 "data_offset": 0, 00:21:41.349 "data_size": 65536 00:21:41.349 } 00:21:41.349 ] 00:21:41.349 }' 00:21:41.349 21:18:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:41.349 21:18:03 -- common/autotest_common.sh@10 -- # set +x 00:21:41.916 21:18:04 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:21:42.183 [2024-06-07 21:18:04.840243] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:42.442 21:18:04 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:21:42.442 21:18:04 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:21:42.442 21:18:04 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:42.442 21:18:04 -- bdev/bdev_raid.sh@196 -- # return 0 00:21:42.442 21:18:04 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:21:42.442 21:18:04 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:21:42.442 21:18:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:42.442 21:18:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:42.442 21:18:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:42.442 21:18:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:42.442 21:18:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:42.442 21:18:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:42.442 21:18:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:42.442 21:18:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:42.442 21:18:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:42.442 21:18:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.442 21:18:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.700 21:18:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:42.700 "name": "Existed_Raid", 00:21:42.700 "uuid": "7320d9f1-3d4b-48ce-847f-147e9680f18e", 00:21:42.700 "strip_size_kb": 64, 00:21:42.700 "state": "online", 00:21:42.700 "raid_level": "raid5f", 00:21:42.700 "superblock": false, 00:21:42.700 "num_base_bdevs": 3, 00:21:42.701 "num_base_bdevs_discovered": 2, 00:21:42.701 "num_base_bdevs_operational": 2, 00:21:42.701 "base_bdevs_list": [ 00:21:42.701 { 00:21:42.701 "name": null, 00:21:42.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.701 "is_configured": false, 00:21:42.701 "data_offset": 0, 00:21:42.701 "data_size": 65536 00:21:42.701 }, 00:21:42.701 { 00:21:42.701 "name": "BaseBdev2", 00:21:42.701 "uuid": "e241b470-0fb9-4b66-b9d0-094ca13ef694", 00:21:42.701 "is_configured": true, 00:21:42.701 "data_offset": 0, 00:21:42.701 "data_size": 65536 00:21:42.701 }, 00:21:42.701 { 00:21:42.701 "name": "BaseBdev3", 00:21:42.701 "uuid": "99430da5-c2d6-4848-9238-9fc6d6fd82f6", 00:21:42.701 "is_configured": true, 00:21:42.701 "data_offset": 0, 00:21:42.701 "data_size": 65536 00:21:42.701 } 00:21:42.701 ] 00:21:42.701 }' 00:21:42.701 21:18:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:42.701 21:18:05 -- common/autotest_common.sh@10 -- # set +x 00:21:43.268 21:18:05 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:21:43.268 21:18:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:43.268 21:18:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.268 21:18:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:43.527 21:18:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:43.527 21:18:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:43.527 21:18:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:43.527 [2024-06-07 21:18:06.183219] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:43.527 [2024-06-07 21:18:06.183255] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:43.527 [2024-06-07 21:18:06.183408] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:43.785 21:18:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:43.785 21:18:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:43.785 21:18:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.785 21:18:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:43.785 21:18:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:43.785 21:18:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:43.785 21:18:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:44.044 [2024-06-07 21:18:06.681756] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:44.044 [2024-06-07 21:18:06.681858] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:21:44.044 21:18:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:44.044 21:18:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:44.044 21:18:06 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.044 21:18:06 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:21:44.302 21:18:06 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:21:44.302 21:18:06 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:21:44.302 21:18:06 -- bdev/bdev_raid.sh@287 -- # killprocess 140724 00:21:44.302 21:18:06 -- common/autotest_common.sh@926 -- # '[' -z 140724 ']' 00:21:44.302 21:18:06 -- common/autotest_common.sh@930 -- # kill -0 140724 00:21:44.302 21:18:06 -- common/autotest_common.sh@931 -- # uname 00:21:44.302 21:18:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:44.302 21:18:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140724 00:21:44.302 killing process with pid 140724 00:21:44.302 21:18:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:44.302 21:18:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:44.302 21:18:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140724' 00:21:44.302 21:18:06 -- common/autotest_common.sh@945 -- # kill 140724 00:21:44.302 21:18:06 -- common/autotest_common.sh@950 -- # wait 140724 00:21:44.302 [2024-06-07 21:18:06.974605] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:44.302 [2024-06-07 21:18:06.974725] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:44.560 ************************************ 00:21:44.560 END TEST raid5f_state_function_test 00:21:44.560 ************************************ 00:21:44.560 21:18:07 -- bdev/bdev_raid.sh@289 -- # return 0 00:21:44.560 00:21:44.560 real 0m11.487s 00:21:44.560 user 0m21.287s 00:21:44.560 sys 0m1.367s 00:21:44.560 21:18:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:44.560 21:18:07 -- common/autotest_common.sh@10 -- # set +x 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:21:44.819 21:18:07 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:21:44.819 
21:18:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:44.819 21:18:07 -- common/autotest_common.sh@10 -- # set +x 00:21:44.819 ************************************ 00:21:44.819 START TEST raid5f_state_function_test_sb 00:21:44.819 ************************************ 00:21:44.819 21:18:07 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 true 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@226 -- # raid_pid=141110 00:21:44.819 Process raid pid: 141110 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 141110' 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:44.819 21:18:07 -- bdev/bdev_raid.sh@228 -- # waitforlisten 141110 /var/tmp/spdk-raid.sock 00:21:44.819 21:18:07 -- common/autotest_common.sh@819 -- # '[' -z 141110 ']' 00:21:44.819 21:18:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:44.819 21:18:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:44.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:44.819 21:18:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
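The raid5f_state_function_test_sb run that begins here repeats the same state machine with one difference: bdev_raid_create is passed -s, so each base bdev carries an on-disk superblock. That is visible in the JSON dumps that follow, where data_offset/data_size become 2048/63488 instead of the 0/65536 seen in the non-superblock test above. A minimal sketch of standing up the same target by hand, built only from commands shown in this log (the pid and the waitforlisten helper belong to the harness; a plain loop polling the socket would serve the same purpose):

  sock=/var/tmp/spdk-raid.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # start the stub bdev application with raid debug logging enabled
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r $sock -i 0 -L bdev_raid &
  raid_pid=$!
  # ... wait until $sock accepts RPCs (the harness runs: waitforlisten $raid_pid $sock) ...
  # register the raid5f bdev with superblock metadata (-s) on its base bdevs;
  # the base bdevs need not exist yet, as the "doesn't exist now" entries below show
  $rpc -s $sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid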
00:21:44.819 21:18:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:44.819 21:18:07 -- common/autotest_common.sh@10 -- # set +x 00:21:44.819 [2024-06-07 21:18:07.321217] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:44.819 [2024-06-07 21:18:07.322116] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.819 [2024-06-07 21:18:07.486571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.078 [2024-06-07 21:18:07.554315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.078 [2024-06-07 21:18:07.610658] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:45.645 21:18:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:45.645 21:18:08 -- common/autotest_common.sh@852 -- # return 0 00:21:45.645 21:18:08 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:45.903 [2024-06-07 21:18:08.339676] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:45.903 [2024-06-07 21:18:08.340023] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:45.903 [2024-06-07 21:18:08.340161] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:45.903 [2024-06-07 21:18:08.340232] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:45.903 [2024-06-07 21:18:08.340459] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:45.903 [2024-06-07 21:18:08.340568] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:45.903 21:18:08 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:45.903 21:18:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:45.903 21:18:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:45.903 21:18:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:45.903 21:18:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:45.903 21:18:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:45.903 21:18:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:45.903 21:18:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:45.903 21:18:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:45.903 21:18:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:45.903 21:18:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.903 21:18:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:46.161 21:18:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:46.161 "name": "Existed_Raid", 00:21:46.161 "uuid": "681b24f9-aacf-46c4-be41-680484afbc21", 00:21:46.161 "strip_size_kb": 64, 00:21:46.161 "state": "configuring", 00:21:46.161 "raid_level": "raid5f", 00:21:46.161 "superblock": true, 00:21:46.161 "num_base_bdevs": 3, 00:21:46.161 "num_base_bdevs_discovered": 0, 00:21:46.161 "num_base_bdevs_operational": 3, 00:21:46.161 "base_bdevs_list": [ 00:21:46.161 { 00:21:46.161 "name": 
"BaseBdev1", 00:21:46.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.161 "is_configured": false, 00:21:46.161 "data_offset": 0, 00:21:46.161 "data_size": 0 00:21:46.161 }, 00:21:46.161 { 00:21:46.161 "name": "BaseBdev2", 00:21:46.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.161 "is_configured": false, 00:21:46.161 "data_offset": 0, 00:21:46.161 "data_size": 0 00:21:46.161 }, 00:21:46.161 { 00:21:46.161 "name": "BaseBdev3", 00:21:46.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.161 "is_configured": false, 00:21:46.161 "data_offset": 0, 00:21:46.161 "data_size": 0 00:21:46.161 } 00:21:46.161 ] 00:21:46.161 }' 00:21:46.161 21:18:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:46.161 21:18:08 -- common/autotest_common.sh@10 -- # set +x 00:21:46.726 21:18:09 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:46.983 [2024-06-07 21:18:09.467640] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:46.983 [2024-06-07 21:18:09.467883] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:21:46.983 21:18:09 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:47.241 [2024-06-07 21:18:09.727792] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:47.241 [2024-06-07 21:18:09.728106] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:47.241 [2024-06-07 21:18:09.728158] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:47.241 [2024-06-07 21:18:09.728204] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:47.241 [2024-06-07 21:18:09.728292] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:47.241 [2024-06-07 21:18:09.728364] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:47.241 21:18:09 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:47.499 [2024-06-07 21:18:10.011077] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:47.499 BaseBdev1 00:21:47.499 21:18:10 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:47.499 21:18:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:21:47.499 21:18:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:47.499 21:18:10 -- common/autotest_common.sh@889 -- # local i 00:21:47.499 21:18:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:47.499 21:18:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:47.499 21:18:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:47.757 21:18:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:47.757 [ 00:21:47.757 { 00:21:47.757 "name": "BaseBdev1", 00:21:47.757 "aliases": [ 00:21:47.757 "84a18a71-40d5-4e52-8532-3e981369d505" 00:21:47.757 ], 00:21:47.757 "product_name": "Malloc disk", 00:21:47.757 "block_size": 512, 00:21:47.757 
"num_blocks": 65536, 00:21:47.757 "uuid": "84a18a71-40d5-4e52-8532-3e981369d505", 00:21:47.757 "assigned_rate_limits": { 00:21:47.757 "rw_ios_per_sec": 0, 00:21:47.757 "rw_mbytes_per_sec": 0, 00:21:47.757 "r_mbytes_per_sec": 0, 00:21:47.757 "w_mbytes_per_sec": 0 00:21:47.757 }, 00:21:47.757 "claimed": true, 00:21:47.757 "claim_type": "exclusive_write", 00:21:47.757 "zoned": false, 00:21:47.757 "supported_io_types": { 00:21:47.757 "read": true, 00:21:47.757 "write": true, 00:21:47.757 "unmap": true, 00:21:47.757 "write_zeroes": true, 00:21:47.757 "flush": true, 00:21:47.757 "reset": true, 00:21:47.757 "compare": false, 00:21:47.757 "compare_and_write": false, 00:21:47.757 "abort": true, 00:21:47.757 "nvme_admin": false, 00:21:47.757 "nvme_io": false 00:21:47.757 }, 00:21:47.757 "memory_domains": [ 00:21:47.757 { 00:21:47.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.757 "dma_device_type": 2 00:21:47.757 } 00:21:47.757 ], 00:21:47.757 "driver_specific": {} 00:21:47.757 } 00:21:47.757 ] 00:21:48.014 21:18:10 -- common/autotest_common.sh@895 -- # return 0 00:21:48.014 21:18:10 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:48.014 21:18:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:48.014 21:18:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:48.014 21:18:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:48.014 21:18:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:48.014 21:18:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:48.014 21:18:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:48.014 21:18:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:48.014 21:18:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:48.014 21:18:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:48.014 21:18:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.014 21:18:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:48.014 21:18:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:48.014 "name": "Existed_Raid", 00:21:48.014 "uuid": "e386aab1-81af-4f26-ab2c-f8812b3895a7", 00:21:48.014 "strip_size_kb": 64, 00:21:48.014 "state": "configuring", 00:21:48.014 "raid_level": "raid5f", 00:21:48.014 "superblock": true, 00:21:48.015 "num_base_bdevs": 3, 00:21:48.015 "num_base_bdevs_discovered": 1, 00:21:48.015 "num_base_bdevs_operational": 3, 00:21:48.015 "base_bdevs_list": [ 00:21:48.015 { 00:21:48.015 "name": "BaseBdev1", 00:21:48.015 "uuid": "84a18a71-40d5-4e52-8532-3e981369d505", 00:21:48.015 "is_configured": true, 00:21:48.015 "data_offset": 2048, 00:21:48.015 "data_size": 63488 00:21:48.015 }, 00:21:48.015 { 00:21:48.015 "name": "BaseBdev2", 00:21:48.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.015 "is_configured": false, 00:21:48.015 "data_offset": 0, 00:21:48.015 "data_size": 0 00:21:48.015 }, 00:21:48.015 { 00:21:48.015 "name": "BaseBdev3", 00:21:48.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.015 "is_configured": false, 00:21:48.015 "data_offset": 0, 00:21:48.015 "data_size": 0 00:21:48.015 } 00:21:48.015 ] 00:21:48.015 }' 00:21:48.015 21:18:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:48.015 21:18:10 -- common/autotest_common.sh@10 -- # set +x 00:21:48.948 21:18:11 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:48.948 [2024-06-07 21:18:11.579519] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:48.948 [2024-06-07 21:18:11.579725] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:21:48.948 21:18:11 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:21:48.948 21:18:11 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:49.207 21:18:11 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:49.465 BaseBdev1 00:21:49.465 21:18:11 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:21:49.465 21:18:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:21:49.465 21:18:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:49.465 21:18:11 -- common/autotest_common.sh@889 -- # local i 00:21:49.465 21:18:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:49.465 21:18:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:49.465 21:18:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:49.723 21:18:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:49.981 [ 00:21:49.981 { 00:21:49.981 "name": "BaseBdev1", 00:21:49.981 "aliases": [ 00:21:49.981 "75f81224-3f1c-44cf-840a-fce60c6d6ff5" 00:21:49.981 ], 00:21:49.981 "product_name": "Malloc disk", 00:21:49.981 "block_size": 512, 00:21:49.981 "num_blocks": 65536, 00:21:49.981 "uuid": "75f81224-3f1c-44cf-840a-fce60c6d6ff5", 00:21:49.981 "assigned_rate_limits": { 00:21:49.981 "rw_ios_per_sec": 0, 00:21:49.981 "rw_mbytes_per_sec": 0, 00:21:49.981 "r_mbytes_per_sec": 0, 00:21:49.981 "w_mbytes_per_sec": 0 00:21:49.981 }, 00:21:49.981 "claimed": false, 00:21:49.981 "zoned": false, 00:21:49.981 "supported_io_types": { 00:21:49.981 "read": true, 00:21:49.981 "write": true, 00:21:49.981 "unmap": true, 00:21:49.981 "write_zeroes": true, 00:21:49.981 "flush": true, 00:21:49.981 "reset": true, 00:21:49.981 "compare": false, 00:21:49.981 "compare_and_write": false, 00:21:49.981 "abort": true, 00:21:49.981 "nvme_admin": false, 00:21:49.981 "nvme_io": false 00:21:49.981 }, 00:21:49.981 "memory_domains": [ 00:21:49.981 { 00:21:49.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.981 "dma_device_type": 2 00:21:49.981 } 00:21:49.981 ], 00:21:49.981 "driver_specific": {} 00:21:49.981 } 00:21:49.981 ] 00:21:49.981 21:18:12 -- common/autotest_common.sh@895 -- # return 0 00:21:49.981 21:18:12 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:49.981 [2024-06-07 21:18:12.628830] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:49.981 [2024-06-07 21:18:12.631018] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:49.981 [2024-06-07 21:18:12.631232] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:49.981 [2024-06-07 21:18:12.631412] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:49.981 [2024-06-07 
21:18:12.631479] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:49.981 21:18:12 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:21:49.981 21:18:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:49.981 21:18:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:49.981 21:18:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:49.981 21:18:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:49.981 21:18:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:49.981 21:18:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:49.981 21:18:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:49.981 21:18:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:49.981 21:18:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:49.981 21:18:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:49.981 21:18:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:49.981 21:18:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.981 21:18:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.239 21:18:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:50.239 "name": "Existed_Raid", 00:21:50.239 "uuid": "a5d7de72-1e7a-423b-b3e3-33d97a655b5a", 00:21:50.239 "strip_size_kb": 64, 00:21:50.239 "state": "configuring", 00:21:50.239 "raid_level": "raid5f", 00:21:50.239 "superblock": true, 00:21:50.239 "num_base_bdevs": 3, 00:21:50.239 "num_base_bdevs_discovered": 1, 00:21:50.239 "num_base_bdevs_operational": 3, 00:21:50.239 "base_bdevs_list": [ 00:21:50.239 { 00:21:50.239 "name": "BaseBdev1", 00:21:50.239 "uuid": "75f81224-3f1c-44cf-840a-fce60c6d6ff5", 00:21:50.239 "is_configured": true, 00:21:50.239 "data_offset": 2048, 00:21:50.239 "data_size": 63488 00:21:50.239 }, 00:21:50.239 { 00:21:50.239 "name": "BaseBdev2", 00:21:50.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.239 "is_configured": false, 00:21:50.239 "data_offset": 0, 00:21:50.239 "data_size": 0 00:21:50.239 }, 00:21:50.239 { 00:21:50.239 "name": "BaseBdev3", 00:21:50.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.239 "is_configured": false, 00:21:50.239 "data_offset": 0, 00:21:50.239 "data_size": 0 00:21:50.239 } 00:21:50.239 ] 00:21:50.239 }' 00:21:50.239 21:18:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:50.239 21:18:12 -- common/autotest_common.sh@10 -- # set +x 00:21:51.191 21:18:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:51.191 [2024-06-07 21:18:13.776827] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:51.191 BaseBdev2 00:21:51.191 21:18:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:21:51.191 21:18:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:21:51.191 21:18:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:51.191 21:18:13 -- common/autotest_common.sh@889 -- # local i 00:21:51.191 21:18:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:51.191 21:18:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:51.191 21:18:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:51.448 21:18:14 -- 
common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:51.706 [ 00:21:51.706 { 00:21:51.706 "name": "BaseBdev2", 00:21:51.706 "aliases": [ 00:21:51.706 "ee36f1cd-0747-4c81-9be6-b42203c75434" 00:21:51.706 ], 00:21:51.706 "product_name": "Malloc disk", 00:21:51.706 "block_size": 512, 00:21:51.706 "num_blocks": 65536, 00:21:51.706 "uuid": "ee36f1cd-0747-4c81-9be6-b42203c75434", 00:21:51.706 "assigned_rate_limits": { 00:21:51.706 "rw_ios_per_sec": 0, 00:21:51.706 "rw_mbytes_per_sec": 0, 00:21:51.706 "r_mbytes_per_sec": 0, 00:21:51.706 "w_mbytes_per_sec": 0 00:21:51.706 }, 00:21:51.706 "claimed": true, 00:21:51.706 "claim_type": "exclusive_write", 00:21:51.706 "zoned": false, 00:21:51.706 "supported_io_types": { 00:21:51.706 "read": true, 00:21:51.706 "write": true, 00:21:51.706 "unmap": true, 00:21:51.706 "write_zeroes": true, 00:21:51.706 "flush": true, 00:21:51.706 "reset": true, 00:21:51.706 "compare": false, 00:21:51.706 "compare_and_write": false, 00:21:51.706 "abort": true, 00:21:51.706 "nvme_admin": false, 00:21:51.706 "nvme_io": false 00:21:51.706 }, 00:21:51.706 "memory_domains": [ 00:21:51.706 { 00:21:51.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.706 "dma_device_type": 2 00:21:51.706 } 00:21:51.706 ], 00:21:51.706 "driver_specific": {} 00:21:51.706 } 00:21:51.706 ] 00:21:51.706 21:18:14 -- common/autotest_common.sh@895 -- # return 0 00:21:51.706 21:18:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:51.706 21:18:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:51.706 21:18:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:51.706 21:18:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:51.706 21:18:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:51.706 21:18:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:51.706 21:18:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:51.706 21:18:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:51.706 21:18:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:51.706 21:18:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:51.706 21:18:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:51.706 21:18:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:51.706 21:18:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:51.706 21:18:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.964 21:18:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:51.964 "name": "Existed_Raid", 00:21:51.964 "uuid": "a5d7de72-1e7a-423b-b3e3-33d97a655b5a", 00:21:51.964 "strip_size_kb": 64, 00:21:51.964 "state": "configuring", 00:21:51.964 "raid_level": "raid5f", 00:21:51.964 "superblock": true, 00:21:51.964 "num_base_bdevs": 3, 00:21:51.964 "num_base_bdevs_discovered": 2, 00:21:51.964 "num_base_bdevs_operational": 3, 00:21:51.964 "base_bdevs_list": [ 00:21:51.964 { 00:21:51.964 "name": "BaseBdev1", 00:21:51.964 "uuid": "75f81224-3f1c-44cf-840a-fce60c6d6ff5", 00:21:51.964 "is_configured": true, 00:21:51.964 "data_offset": 2048, 00:21:51.964 "data_size": 63488 00:21:51.964 }, 00:21:51.964 { 00:21:51.964 "name": "BaseBdev2", 00:21:51.964 "uuid": "ee36f1cd-0747-4c81-9be6-b42203c75434", 00:21:51.965 "is_configured": true, 00:21:51.965 "data_offset": 2048, 00:21:51.965 
"data_size": 63488 00:21:51.965 }, 00:21:51.965 { 00:21:51.965 "name": "BaseBdev3", 00:21:51.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.965 "is_configured": false, 00:21:51.965 "data_offset": 0, 00:21:51.965 "data_size": 0 00:21:51.965 } 00:21:51.965 ] 00:21:51.965 }' 00:21:51.965 21:18:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:51.965 21:18:14 -- common/autotest_common.sh@10 -- # set +x 00:21:52.532 21:18:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:52.800 [2024-06-07 21:18:15.394603] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:52.800 [2024-06-07 21:18:15.394868] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:21:52.800 [2024-06-07 21:18:15.394884] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:52.800 BaseBdev3 00:21:52.800 [2024-06-07 21:18:15.395077] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:21:52.800 [2024-06-07 21:18:15.395982] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:21:52.800 [2024-06-07 21:18:15.396007] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:21:52.800 [2024-06-07 21:18:15.396179] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.800 21:18:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:21:52.800 21:18:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:21:52.800 21:18:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:52.800 21:18:15 -- common/autotest_common.sh@889 -- # local i 00:21:52.800 21:18:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:52.800 21:18:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:52.800 21:18:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:53.075 21:18:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:53.333 [ 00:21:53.333 { 00:21:53.333 "name": "BaseBdev3", 00:21:53.333 "aliases": [ 00:21:53.333 "52ec8712-4166-4d60-b597-5acadb206afc" 00:21:53.333 ], 00:21:53.333 "product_name": "Malloc disk", 00:21:53.333 "block_size": 512, 00:21:53.333 "num_blocks": 65536, 00:21:53.333 "uuid": "52ec8712-4166-4d60-b597-5acadb206afc", 00:21:53.333 "assigned_rate_limits": { 00:21:53.333 "rw_ios_per_sec": 0, 00:21:53.333 "rw_mbytes_per_sec": 0, 00:21:53.333 "r_mbytes_per_sec": 0, 00:21:53.333 "w_mbytes_per_sec": 0 00:21:53.333 }, 00:21:53.333 "claimed": true, 00:21:53.333 "claim_type": "exclusive_write", 00:21:53.333 "zoned": false, 00:21:53.333 "supported_io_types": { 00:21:53.333 "read": true, 00:21:53.333 "write": true, 00:21:53.333 "unmap": true, 00:21:53.333 "write_zeroes": true, 00:21:53.333 "flush": true, 00:21:53.333 "reset": true, 00:21:53.333 "compare": false, 00:21:53.333 "compare_and_write": false, 00:21:53.333 "abort": true, 00:21:53.333 "nvme_admin": false, 00:21:53.333 "nvme_io": false 00:21:53.333 }, 00:21:53.333 "memory_domains": [ 00:21:53.333 { 00:21:53.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.333 "dma_device_type": 2 00:21:53.333 } 00:21:53.333 ], 00:21:53.333 "driver_specific": {} 00:21:53.333 } 00:21:53.333 ] 00:21:53.333 
21:18:15 -- common/autotest_common.sh@895 -- # return 0 00:21:53.333 21:18:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:53.333 21:18:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:53.333 21:18:15 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:53.333 21:18:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:53.333 21:18:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:53.333 21:18:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:53.333 21:18:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:53.333 21:18:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:53.333 21:18:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:53.333 21:18:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:53.333 21:18:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:53.333 21:18:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:53.333 21:18:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.333 21:18:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.592 21:18:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:53.592 "name": "Existed_Raid", 00:21:53.592 "uuid": "a5d7de72-1e7a-423b-b3e3-33d97a655b5a", 00:21:53.592 "strip_size_kb": 64, 00:21:53.592 "state": "online", 00:21:53.592 "raid_level": "raid5f", 00:21:53.592 "superblock": true, 00:21:53.592 "num_base_bdevs": 3, 00:21:53.592 "num_base_bdevs_discovered": 3, 00:21:53.592 "num_base_bdevs_operational": 3, 00:21:53.592 "base_bdevs_list": [ 00:21:53.592 { 00:21:53.592 "name": "BaseBdev1", 00:21:53.592 "uuid": "75f81224-3f1c-44cf-840a-fce60c6d6ff5", 00:21:53.592 "is_configured": true, 00:21:53.592 "data_offset": 2048, 00:21:53.592 "data_size": 63488 00:21:53.592 }, 00:21:53.592 { 00:21:53.592 "name": "BaseBdev2", 00:21:53.592 "uuid": "ee36f1cd-0747-4c81-9be6-b42203c75434", 00:21:53.592 "is_configured": true, 00:21:53.592 "data_offset": 2048, 00:21:53.592 "data_size": 63488 00:21:53.592 }, 00:21:53.592 { 00:21:53.592 "name": "BaseBdev3", 00:21:53.592 "uuid": "52ec8712-4166-4d60-b597-5acadb206afc", 00:21:53.592 "is_configured": true, 00:21:53.592 "data_offset": 2048, 00:21:53.592 "data_size": 63488 00:21:53.592 } 00:21:53.592 ] 00:21:53.592 }' 00:21:53.592 21:18:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:53.592 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:21:54.159 21:18:16 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:54.418 [2024-06-07 21:18:16.991141] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:54.418 21:18:17 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:21:54.418 21:18:17 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:21:54.418 21:18:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:54.418 21:18:17 -- bdev/bdev_raid.sh@196 -- # return 0 00:21:54.418 21:18:17 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:21:54.418 21:18:17 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:21:54.418 21:18:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:54.418 21:18:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:54.418 21:18:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:54.418 21:18:17 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:21:54.418 21:18:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:54.418 21:18:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:54.418 21:18:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:54.418 21:18:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:54.418 21:18:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:54.418 21:18:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.418 21:18:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:54.676 21:18:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:54.676 "name": "Existed_Raid", 00:21:54.676 "uuid": "a5d7de72-1e7a-423b-b3e3-33d97a655b5a", 00:21:54.676 "strip_size_kb": 64, 00:21:54.676 "state": "online", 00:21:54.676 "raid_level": "raid5f", 00:21:54.676 "superblock": true, 00:21:54.676 "num_base_bdevs": 3, 00:21:54.676 "num_base_bdevs_discovered": 2, 00:21:54.676 "num_base_bdevs_operational": 2, 00:21:54.676 "base_bdevs_list": [ 00:21:54.676 { 00:21:54.676 "name": null, 00:21:54.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.676 "is_configured": false, 00:21:54.676 "data_offset": 2048, 00:21:54.676 "data_size": 63488 00:21:54.676 }, 00:21:54.676 { 00:21:54.676 "name": "BaseBdev2", 00:21:54.676 "uuid": "ee36f1cd-0747-4c81-9be6-b42203c75434", 00:21:54.676 "is_configured": true, 00:21:54.676 "data_offset": 2048, 00:21:54.676 "data_size": 63488 00:21:54.676 }, 00:21:54.676 { 00:21:54.676 "name": "BaseBdev3", 00:21:54.676 "uuid": "52ec8712-4166-4d60-b597-5acadb206afc", 00:21:54.676 "is_configured": true, 00:21:54.676 "data_offset": 2048, 00:21:54.676 "data_size": 63488 00:21:54.676 } 00:21:54.676 ] 00:21:54.676 }' 00:21:54.676 21:18:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:54.677 21:18:17 -- common/autotest_common.sh@10 -- # set +x 00:21:55.611 21:18:17 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:21:55.611 21:18:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:55.611 21:18:17 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.611 21:18:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:55.611 21:18:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:55.611 21:18:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:55.611 21:18:18 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:55.870 [2024-06-07 21:18:18.460853] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:55.870 [2024-06-07 21:18:18.460916] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:55.870 [2024-06-07 21:18:18.460993] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:55.870 21:18:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:55.870 21:18:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:55.870 21:18:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.870 21:18:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:56.128 21:18:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:56.128 21:18:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:56.128 21:18:18 -- bdev/bdev_raid.sh@279 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:56.386 [2024-06-07 21:18:18.966113] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:56.386 [2024-06-07 21:18:18.966216] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:21:56.386 21:18:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:56.386 21:18:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:56.386 21:18:18 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:21:56.386 21:18:18 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.645 21:18:19 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:21:56.645 21:18:19 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:21:56.645 21:18:19 -- bdev/bdev_raid.sh@287 -- # killprocess 141110 00:21:56.645 21:18:19 -- common/autotest_common.sh@926 -- # '[' -z 141110 ']' 00:21:56.645 21:18:19 -- common/autotest_common.sh@930 -- # kill -0 141110 00:21:56.645 21:18:19 -- common/autotest_common.sh@931 -- # uname 00:21:56.645 21:18:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:56.645 21:18:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141110 00:21:56.645 21:18:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:56.645 21:18:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:56.645 21:18:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141110' 00:21:56.645 killing process with pid 141110 00:21:56.645 21:18:19 -- common/autotest_common.sh@945 -- # kill 141110 00:21:56.645 21:18:19 -- common/autotest_common.sh@950 -- # wait 141110 00:21:56.645 [2024-06-07 21:18:19.262733] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:56.645 [2024-06-07 21:18:19.262819] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:56.904 21:18:19 -- bdev/bdev_raid.sh@289 -- # return 0 00:21:56.904 00:21:56.904 real 0m12.235s 00:21:56.904 user 0m22.745s 00:21:56.904 sys 0m1.394s 00:21:56.904 21:18:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:56.904 21:18:19 -- common/autotest_common.sh@10 -- # set +x 00:21:56.904 ************************************ 00:21:56.904 END TEST raid5f_state_function_test_sb 00:21:56.904 ************************************ 00:21:56.904 21:18:19 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:21:56.904 21:18:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:21:56.904 21:18:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:56.904 21:18:19 -- common/autotest_common.sh@10 -- # set +x 00:21:56.904 ************************************ 00:21:56.904 START TEST raid5f_superblock_test 00:21:56.904 ************************************ 00:21:56.904 21:18:19 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 3 00:21:56.904 21:18:19 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:21:56.905 21:18:19 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:21:56.905 21:18:19 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:21:56.905 21:18:19 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:21:56.905 21:18:19 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:21:56.905 21:18:19 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:21:56.905 21:18:19 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:21:56.905 21:18:19 
-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:21:56.905 21:18:19 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:21:56.905 21:18:19 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:21:56.905 21:18:19 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:21:56.905 21:18:19 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:21:56.905 21:18:19 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:21:56.905 21:18:19 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:21:56.905 21:18:19 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:21:56.905 21:18:19 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:21:56.905 21:18:19 -- bdev/bdev_raid.sh@357 -- # raid_pid=141510 00:21:56.905 21:18:19 -- bdev/bdev_raid.sh@358 -- # waitforlisten 141510 /var/tmp/spdk-raid.sock 00:21:56.905 21:18:19 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:56.905 21:18:19 -- common/autotest_common.sh@819 -- # '[' -z 141510 ']' 00:21:56.905 21:18:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:56.905 21:18:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:56.905 21:18:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:56.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:56.905 21:18:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:56.905 21:18:19 -- common/autotest_common.sh@10 -- # set +x 00:21:57.163 [2024-06-07 21:18:19.614257] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:57.163 [2024-06-07 21:18:19.614449] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141510 ] 00:21:57.163 [2024-06-07 21:18:19.779496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.421 [2024-06-07 21:18:19.875483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.421 [2024-06-07 21:18:19.932940] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:57.987 21:18:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:57.987 21:18:20 -- common/autotest_common.sh@852 -- # return 0 00:21:57.987 21:18:20 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:21:57.987 21:18:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:57.987 21:18:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:21:57.987 21:18:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:21:57.987 21:18:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:57.987 21:18:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:57.987 21:18:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:57.987 21:18:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:57.987 21:18:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:58.245 malloc1 00:21:58.245 21:18:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:58.502 
[2024-06-07 21:18:20.959754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:58.502 [2024-06-07 21:18:20.959874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:58.502 [2024-06-07 21:18:20.959919] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:21:58.502 [2024-06-07 21:18:20.959971] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:58.502 [2024-06-07 21:18:20.962523] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:58.502 [2024-06-07 21:18:20.962584] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:58.502 pt1 00:21:58.502 21:18:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:58.502 21:18:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:58.502 21:18:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:21:58.502 21:18:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:21:58.502 21:18:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:58.503 21:18:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:58.503 21:18:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:58.503 21:18:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:58.503 21:18:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:58.760 malloc2 00:21:58.760 21:18:21 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:59.017 [2024-06-07 21:18:21.438889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:59.017 [2024-06-07 21:18:21.438989] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.017 [2024-06-07 21:18:21.439035] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:21:59.017 [2024-06-07 21:18:21.439091] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.017 [2024-06-07 21:18:21.441684] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.017 [2024-06-07 21:18:21.441744] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:59.017 pt2 00:21:59.017 21:18:21 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:59.017 21:18:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:59.017 21:18:21 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:21:59.017 21:18:21 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:21:59.017 21:18:21 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:59.017 21:18:21 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:59.017 21:18:21 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:59.017 21:18:21 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:59.017 21:18:21 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:59.017 malloc3 00:21:59.017 21:18:21 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:59.274 
[2024-06-07 21:18:21.865007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:59.274 [2024-06-07 21:18:21.865135] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.274 [2024-06-07 21:18:21.865183] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:59.275 [2024-06-07 21:18:21.865278] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.275 [2024-06-07 21:18:21.867551] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.275 [2024-06-07 21:18:21.867604] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:59.275 pt3 00:21:59.275 21:18:21 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:59.275 21:18:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:59.275 21:18:21 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:21:59.532 [2024-06-07 21:18:22.065128] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:59.533 [2024-06-07 21:18:22.067111] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:59.533 [2024-06-07 21:18:22.067200] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:59.533 [2024-06-07 21:18:22.067472] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:21:59.533 [2024-06-07 21:18:22.067498] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:59.533 [2024-06-07 21:18:22.067669] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:59.533 [2024-06-07 21:18:22.068479] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:21:59.533 [2024-06-07 21:18:22.068502] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:21:59.533 [2024-06-07 21:18:22.068676] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.533 21:18:22 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:59.533 21:18:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:59.533 21:18:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:59.533 21:18:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:59.533 21:18:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:59.533 21:18:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:59.533 21:18:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:59.533 21:18:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:59.533 21:18:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:59.533 21:18:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:59.533 21:18:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.533 21:18:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.790 21:18:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:59.790 "name": "raid_bdev1", 00:21:59.790 "uuid": "29b9b0e5-6be9-400b-9059-e2770dc4c111", 00:21:59.790 "strip_size_kb": 64, 00:21:59.791 "state": "online", 00:21:59.791 "raid_level": "raid5f", 00:21:59.791 "superblock": true, 00:21:59.791 
"num_base_bdevs": 3, 00:21:59.791 "num_base_bdevs_discovered": 3, 00:21:59.791 "num_base_bdevs_operational": 3, 00:21:59.791 "base_bdevs_list": [ 00:21:59.791 { 00:21:59.791 "name": "pt1", 00:21:59.791 "uuid": "a1e2d9eb-6414-5999-884b-12b63e3a1186", 00:21:59.791 "is_configured": true, 00:21:59.791 "data_offset": 2048, 00:21:59.791 "data_size": 63488 00:21:59.791 }, 00:21:59.791 { 00:21:59.791 "name": "pt2", 00:21:59.791 "uuid": "c5300749-769a-5e7a-98b2-9141268d60ff", 00:21:59.791 "is_configured": true, 00:21:59.791 "data_offset": 2048, 00:21:59.791 "data_size": 63488 00:21:59.791 }, 00:21:59.791 { 00:21:59.791 "name": "pt3", 00:21:59.791 "uuid": "fbc537d3-0374-5bbb-87a4-b4a4efd4045d", 00:21:59.791 "is_configured": true, 00:21:59.791 "data_offset": 2048, 00:21:59.791 "data_size": 63488 00:21:59.791 } 00:21:59.791 ] 00:21:59.791 }' 00:21:59.791 21:18:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:59.791 21:18:22 -- common/autotest_common.sh@10 -- # set +x 00:22:00.356 21:18:22 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:00.356 21:18:22 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:22:00.613 [2024-06-07 21:18:23.174880] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:00.613 21:18:23 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=29b9b0e5-6be9-400b-9059-e2770dc4c111 00:22:00.613 21:18:23 -- bdev/bdev_raid.sh@380 -- # '[' -z 29b9b0e5-6be9-400b-9059-e2770dc4c111 ']' 00:22:00.613 21:18:23 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:00.870 [2024-06-07 21:18:23.414760] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:00.870 [2024-06-07 21:18:23.414790] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:00.870 [2024-06-07 21:18:23.414947] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:00.870 [2024-06-07 21:18:23.415059] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:00.870 [2024-06-07 21:18:23.415078] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:22:00.870 21:18:23 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.870 21:18:23 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:22:01.128 21:18:23 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:22:01.128 21:18:23 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:22:01.128 21:18:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:01.128 21:18:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:01.386 21:18:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:01.386 21:18:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:01.644 21:18:24 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:01.644 21:18:24 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:01.644 21:18:24 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:01.644 21:18:24 -- 
bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:01.903 21:18:24 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:22:01.903 21:18:24 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:01.903 21:18:24 -- common/autotest_common.sh@640 -- # local es=0 00:22:01.903 21:18:24 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:01.903 21:18:24 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:01.903 21:18:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:01.903 21:18:24 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:01.903 21:18:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:01.903 21:18:24 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:01.903 21:18:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:01.903 21:18:24 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:01.903 21:18:24 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:01.903 21:18:24 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:02.161 [2024-06-07 21:18:24.683012] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:02.161 [2024-06-07 21:18:24.685227] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:02.161 [2024-06-07 21:18:24.685315] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:02.161 [2024-06-07 21:18:24.685368] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:22:02.161 [2024-06-07 21:18:24.685447] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:22:02.161 [2024-06-07 21:18:24.685498] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:22:02.161 [2024-06-07 21:18:24.685564] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:02.161 [2024-06-07 21:18:24.685577] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:22:02.161 request: 00:22:02.161 { 00:22:02.161 "name": "raid_bdev1", 00:22:02.161 "raid_level": "raid5f", 00:22:02.161 "base_bdevs": [ 00:22:02.161 "malloc1", 00:22:02.161 "malloc2", 00:22:02.161 "malloc3" 00:22:02.161 ], 00:22:02.161 "superblock": false, 00:22:02.161 "strip_size_kb": 64, 00:22:02.161 "method": "bdev_raid_create", 00:22:02.161 "req_id": 1 00:22:02.161 } 00:22:02.161 Got JSON-RPC error response 00:22:02.161 response: 00:22:02.161 { 00:22:02.161 "code": -17, 00:22:02.161 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:02.161 } 00:22:02.161 21:18:24 -- common/autotest_common.sh@643 -- # es=1 00:22:02.161 21:18:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:02.161 21:18:24 -- 
common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:02.161 21:18:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:02.161 21:18:24 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.161 21:18:24 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:22:02.419 21:18:24 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:22:02.419 21:18:24 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:22:02.419 21:18:24 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:02.419 [2024-06-07 21:18:25.087001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:02.419 [2024-06-07 21:18:25.087096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.419 [2024-06-07 21:18:25.087136] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:02.419 [2024-06-07 21:18:25.087173] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.419 [2024-06-07 21:18:25.089636] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.419 [2024-06-07 21:18:25.089695] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:02.419 [2024-06-07 21:18:25.089793] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:02.419 [2024-06-07 21:18:25.089852] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:02.419 pt1 00:22:02.676 21:18:25 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:02.676 21:18:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:02.676 21:18:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:02.676 21:18:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:02.676 21:18:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:02.676 21:18:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:02.676 21:18:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:02.676 21:18:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:02.676 21:18:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:02.676 21:18:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:02.676 21:18:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.676 21:18:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.676 21:18:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:02.676 "name": "raid_bdev1", 00:22:02.676 "uuid": "29b9b0e5-6be9-400b-9059-e2770dc4c111", 00:22:02.676 "strip_size_kb": 64, 00:22:02.676 "state": "configuring", 00:22:02.676 "raid_level": "raid5f", 00:22:02.676 "superblock": true, 00:22:02.676 "num_base_bdevs": 3, 00:22:02.676 "num_base_bdevs_discovered": 1, 00:22:02.676 "num_base_bdevs_operational": 3, 00:22:02.676 "base_bdevs_list": [ 00:22:02.676 { 00:22:02.676 "name": "pt1", 00:22:02.676 "uuid": "a1e2d9eb-6414-5999-884b-12b63e3a1186", 00:22:02.676 "is_configured": true, 00:22:02.676 "data_offset": 2048, 00:22:02.676 "data_size": 63488 00:22:02.676 }, 00:22:02.676 { 00:22:02.676 "name": null, 00:22:02.676 "uuid": "c5300749-769a-5e7a-98b2-9141268d60ff", 00:22:02.676 "is_configured": false, 00:22:02.676 
"data_offset": 2048, 00:22:02.676 "data_size": 63488 00:22:02.676 }, 00:22:02.676 { 00:22:02.676 "name": null, 00:22:02.676 "uuid": "fbc537d3-0374-5bbb-87a4-b4a4efd4045d", 00:22:02.676 "is_configured": false, 00:22:02.676 "data_offset": 2048, 00:22:02.676 "data_size": 63488 00:22:02.676 } 00:22:02.676 ] 00:22:02.676 }' 00:22:02.676 21:18:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:02.676 21:18:25 -- common/autotest_common.sh@10 -- # set +x 00:22:03.614 21:18:25 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:22:03.614 21:18:25 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:03.614 [2024-06-07 21:18:26.175278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:03.614 [2024-06-07 21:18:26.175378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.614 [2024-06-07 21:18:26.175463] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:03.614 [2024-06-07 21:18:26.175506] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:03.614 [2024-06-07 21:18:26.176034] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:03.614 [2024-06-07 21:18:26.176066] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:03.614 [2024-06-07 21:18:26.176172] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:03.614 [2024-06-07 21:18:26.176203] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:03.614 pt2 00:22:03.614 21:18:26 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:03.872 [2024-06-07 21:18:26.363271] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:03.872 21:18:26 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:03.872 21:18:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:03.872 21:18:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:03.872 21:18:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:03.872 21:18:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:03.872 21:18:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:03.872 21:18:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:03.872 21:18:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:03.872 21:18:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:03.872 21:18:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:03.872 21:18:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.872 21:18:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.130 21:18:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:04.130 "name": "raid_bdev1", 00:22:04.130 "uuid": "29b9b0e5-6be9-400b-9059-e2770dc4c111", 00:22:04.130 "strip_size_kb": 64, 00:22:04.130 "state": "configuring", 00:22:04.130 "raid_level": "raid5f", 00:22:04.130 "superblock": true, 00:22:04.130 "num_base_bdevs": 3, 00:22:04.130 "num_base_bdevs_discovered": 1, 00:22:04.130 "num_base_bdevs_operational": 3, 00:22:04.130 "base_bdevs_list": [ 00:22:04.130 { 00:22:04.130 "name": "pt1", 00:22:04.130 "uuid": 
"a1e2d9eb-6414-5999-884b-12b63e3a1186", 00:22:04.130 "is_configured": true, 00:22:04.130 "data_offset": 2048, 00:22:04.130 "data_size": 63488 00:22:04.130 }, 00:22:04.130 { 00:22:04.130 "name": null, 00:22:04.130 "uuid": "c5300749-769a-5e7a-98b2-9141268d60ff", 00:22:04.130 "is_configured": false, 00:22:04.130 "data_offset": 2048, 00:22:04.130 "data_size": 63488 00:22:04.130 }, 00:22:04.130 { 00:22:04.130 "name": null, 00:22:04.130 "uuid": "fbc537d3-0374-5bbb-87a4-b4a4efd4045d", 00:22:04.130 "is_configured": false, 00:22:04.130 "data_offset": 2048, 00:22:04.130 "data_size": 63488 00:22:04.130 } 00:22:04.130 ] 00:22:04.130 }' 00:22:04.130 21:18:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:04.130 21:18:26 -- common/autotest_common.sh@10 -- # set +x 00:22:04.698 21:18:27 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:22:04.698 21:18:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:04.698 21:18:27 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:04.956 [2024-06-07 21:18:27.443536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:04.956 [2024-06-07 21:18:27.443645] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:04.956 [2024-06-07 21:18:27.443686] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:04.956 [2024-06-07 21:18:27.443715] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:04.956 [2024-06-07 21:18:27.444195] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:04.956 [2024-06-07 21:18:27.444230] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:04.956 [2024-06-07 21:18:27.444322] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:04.956 [2024-06-07 21:18:27.444350] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:04.956 pt2 00:22:04.956 21:18:27 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:04.956 21:18:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:04.956 21:18:27 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:05.213 [2024-06-07 21:18:27.639556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:05.213 [2024-06-07 21:18:27.639647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:05.213 [2024-06-07 21:18:27.639683] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:22:05.213 [2024-06-07 21:18:27.639711] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:05.213 [2024-06-07 21:18:27.640179] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:05.213 [2024-06-07 21:18:27.640213] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:05.213 [2024-06-07 21:18:27.640300] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:05.213 [2024-06-07 21:18:27.640326] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:05.213 [2024-06-07 21:18:27.640458] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 
00:22:05.213 [2024-06-07 21:18:27.640472] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:05.213 [2024-06-07 21:18:27.640551] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:05.213 [2024-06-07 21:18:27.641237] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:22:05.213 [2024-06-07 21:18:27.641262] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:22:05.213 [2024-06-07 21:18:27.641392] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:05.213 pt3 00:22:05.213 21:18:27 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:05.213 21:18:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:05.213 21:18:27 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:05.213 21:18:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:05.213 21:18:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:05.213 21:18:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:05.213 21:18:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:05.213 21:18:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:05.213 21:18:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:05.213 21:18:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:05.213 21:18:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:05.213 21:18:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:05.213 21:18:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.213 21:18:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.471 21:18:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:05.471 "name": "raid_bdev1", 00:22:05.471 "uuid": "29b9b0e5-6be9-400b-9059-e2770dc4c111", 00:22:05.471 "strip_size_kb": 64, 00:22:05.471 "state": "online", 00:22:05.471 "raid_level": "raid5f", 00:22:05.471 "superblock": true, 00:22:05.471 "num_base_bdevs": 3, 00:22:05.471 "num_base_bdevs_discovered": 3, 00:22:05.471 "num_base_bdevs_operational": 3, 00:22:05.471 "base_bdevs_list": [ 00:22:05.471 { 00:22:05.471 "name": "pt1", 00:22:05.471 "uuid": "a1e2d9eb-6414-5999-884b-12b63e3a1186", 00:22:05.471 "is_configured": true, 00:22:05.471 "data_offset": 2048, 00:22:05.471 "data_size": 63488 00:22:05.471 }, 00:22:05.471 { 00:22:05.471 "name": "pt2", 00:22:05.471 "uuid": "c5300749-769a-5e7a-98b2-9141268d60ff", 00:22:05.471 "is_configured": true, 00:22:05.471 "data_offset": 2048, 00:22:05.471 "data_size": 63488 00:22:05.471 }, 00:22:05.471 { 00:22:05.471 "name": "pt3", 00:22:05.471 "uuid": "fbc537d3-0374-5bbb-87a4-b4a4efd4045d", 00:22:05.471 "is_configured": true, 00:22:05.471 "data_offset": 2048, 00:22:05.471 "data_size": 63488 00:22:05.471 } 00:22:05.471 ] 00:22:05.471 }' 00:22:05.471 21:18:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:05.471 21:18:27 -- common/autotest_common.sh@10 -- # set +x 00:22:06.039 21:18:28 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:06.039 21:18:28 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:22:06.298 [2024-06-07 21:18:28.800010] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:06.298 21:18:28 -- bdev/bdev_raid.sh@430 -- # '[' 
29b9b0e5-6be9-400b-9059-e2770dc4c111 '!=' 29b9b0e5-6be9-400b-9059-e2770dc4c111 ']' 00:22:06.298 21:18:28 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:22:06.298 21:18:28 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:06.298 21:18:28 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:06.298 21:18:28 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:06.557 [2024-06-07 21:18:28.995979] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:06.557 21:18:29 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:06.557 21:18:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:06.557 21:18:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:06.557 21:18:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:06.557 21:18:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:06.557 21:18:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:06.557 21:18:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:06.557 21:18:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:06.557 21:18:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:06.557 21:18:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:06.557 21:18:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.557 21:18:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.815 21:18:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:06.815 "name": "raid_bdev1", 00:22:06.815 "uuid": "29b9b0e5-6be9-400b-9059-e2770dc4c111", 00:22:06.815 "strip_size_kb": 64, 00:22:06.815 "state": "online", 00:22:06.815 "raid_level": "raid5f", 00:22:06.815 "superblock": true, 00:22:06.815 "num_base_bdevs": 3, 00:22:06.815 "num_base_bdevs_discovered": 2, 00:22:06.815 "num_base_bdevs_operational": 2, 00:22:06.815 "base_bdevs_list": [ 00:22:06.815 { 00:22:06.815 "name": null, 00:22:06.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.815 "is_configured": false, 00:22:06.815 "data_offset": 2048, 00:22:06.815 "data_size": 63488 00:22:06.815 }, 00:22:06.815 { 00:22:06.815 "name": "pt2", 00:22:06.815 "uuid": "c5300749-769a-5e7a-98b2-9141268d60ff", 00:22:06.815 "is_configured": true, 00:22:06.815 "data_offset": 2048, 00:22:06.815 "data_size": 63488 00:22:06.815 }, 00:22:06.815 { 00:22:06.815 "name": "pt3", 00:22:06.815 "uuid": "fbc537d3-0374-5bbb-87a4-b4a4efd4045d", 00:22:06.815 "is_configured": true, 00:22:06.815 "data_offset": 2048, 00:22:06.815 "data_size": 63488 00:22:06.815 } 00:22:06.815 ] 00:22:06.815 }' 00:22:06.815 21:18:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:06.815 21:18:29 -- common/autotest_common.sh@10 -- # set +x 00:22:07.382 21:18:29 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:07.640 [2024-06-07 21:18:30.216247] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:07.640 [2024-06-07 21:18:30.216286] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:07.640 [2024-06-07 21:18:30.216373] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:07.640 [2024-06-07 21:18:30.216436] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:07.640 
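The verify_raid_bdev_state helper seen throughout this trace asserts on the array by dumping it over RPC and filtering with jq; a minimal standalone equivalent, using the exact query from the trace (the field list in the second jq call is illustrative, but these keys all appear in the JSON dumps above and are what the helper compares):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
echo "$info" | jq -r '.state, .raid_level, .strip_size_kb, .num_base_bdevs_discovered, .num_base_bdevs_operational'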
[2024-06-07 21:18:30.216448] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:22:07.640 21:18:30 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.640 21:18:30 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:22:07.899 21:18:30 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:22:07.899 21:18:30 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:22:07.899 21:18:30 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:22:07.899 21:18:30 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:07.899 21:18:30 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:08.157 21:18:30 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:08.157 21:18:30 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:08.157 21:18:30 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:08.415 21:18:30 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:08.415 21:18:30 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:08.415 21:18:30 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:22:08.415 21:18:30 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:08.415 21:18:30 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:08.675 [2024-06-07 21:18:31.148412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:08.675 [2024-06-07 21:18:31.148489] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.675 [2024-06-07 21:18:31.148527] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:08.675 [2024-06-07 21:18:31.148551] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.675 [2024-06-07 21:18:31.150725] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.675 [2024-06-07 21:18:31.150765] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:08.675 [2024-06-07 21:18:31.150865] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:08.675 [2024-06-07 21:18:31.150913] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:08.675 pt2 00:22:08.675 21:18:31 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:22:08.675 21:18:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:08.675 21:18:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:08.675 21:18:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:08.675 21:18:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:08.675 21:18:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:08.675 21:18:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:08.675 21:18:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:08.675 21:18:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:08.675 21:18:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:08.675 21:18:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.675 21:18:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:22:08.933 21:18:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:08.933 "name": "raid_bdev1", 00:22:08.933 "uuid": "29b9b0e5-6be9-400b-9059-e2770dc4c111", 00:22:08.933 "strip_size_kb": 64, 00:22:08.933 "state": "configuring", 00:22:08.933 "raid_level": "raid5f", 00:22:08.933 "superblock": true, 00:22:08.933 "num_base_bdevs": 3, 00:22:08.933 "num_base_bdevs_discovered": 1, 00:22:08.933 "num_base_bdevs_operational": 2, 00:22:08.933 "base_bdevs_list": [ 00:22:08.933 { 00:22:08.933 "name": null, 00:22:08.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.933 "is_configured": false, 00:22:08.933 "data_offset": 2048, 00:22:08.933 "data_size": 63488 00:22:08.933 }, 00:22:08.933 { 00:22:08.933 "name": "pt2", 00:22:08.933 "uuid": "c5300749-769a-5e7a-98b2-9141268d60ff", 00:22:08.933 "is_configured": true, 00:22:08.933 "data_offset": 2048, 00:22:08.933 "data_size": 63488 00:22:08.933 }, 00:22:08.933 { 00:22:08.933 "name": null, 00:22:08.933 "uuid": "fbc537d3-0374-5bbb-87a4-b4a4efd4045d", 00:22:08.933 "is_configured": false, 00:22:08.933 "data_offset": 2048, 00:22:08.933 "data_size": 63488 00:22:08.933 } 00:22:08.933 ] 00:22:08.933 }' 00:22:08.933 21:18:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:08.933 21:18:31 -- common/autotest_common.sh@10 -- # set +x 00:22:09.499 21:18:32 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:22:09.499 21:18:32 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:09.499 21:18:32 -- bdev/bdev_raid.sh@462 -- # i=2 00:22:09.499 21:18:32 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:09.758 [2024-06-07 21:18:32.232670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:09.758 [2024-06-07 21:18:32.232771] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:09.758 [2024-06-07 21:18:32.232813] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:09.758 [2024-06-07 21:18:32.232838] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:09.758 [2024-06-07 21:18:32.233365] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:09.758 [2024-06-07 21:18:32.233407] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:09.758 [2024-06-07 21:18:32.233573] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:09.758 [2024-06-07 21:18:32.233621] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:09.758 [2024-06-07 21:18:32.233750] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:22:09.758 [2024-06-07 21:18:32.233763] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:09.758 [2024-06-07 21:18:32.233845] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:09.758 [2024-06-07 21:18:32.234568] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:22:09.758 [2024-06-07 21:18:32.234593] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:22:09.758 [2024-06-07 21:18:32.234830] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:09.758 pt3 00:22:09.758 21:18:32 -- bdev/bdev_raid.sh@466 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:09.758 21:18:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:09.758 21:18:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:09.758 21:18:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:09.758 21:18:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:09.758 21:18:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:09.758 21:18:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:09.758 21:18:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:09.758 21:18:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:09.758 21:18:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:09.758 21:18:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.758 21:18:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.016 21:18:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:10.016 "name": "raid_bdev1", 00:22:10.016 "uuid": "29b9b0e5-6be9-400b-9059-e2770dc4c111", 00:22:10.016 "strip_size_kb": 64, 00:22:10.016 "state": "online", 00:22:10.016 "raid_level": "raid5f", 00:22:10.016 "superblock": true, 00:22:10.016 "num_base_bdevs": 3, 00:22:10.016 "num_base_bdevs_discovered": 2, 00:22:10.016 "num_base_bdevs_operational": 2, 00:22:10.016 "base_bdevs_list": [ 00:22:10.016 { 00:22:10.016 "name": null, 00:22:10.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.016 "is_configured": false, 00:22:10.016 "data_offset": 2048, 00:22:10.016 "data_size": 63488 00:22:10.016 }, 00:22:10.016 { 00:22:10.016 "name": "pt2", 00:22:10.016 "uuid": "c5300749-769a-5e7a-98b2-9141268d60ff", 00:22:10.016 "is_configured": true, 00:22:10.016 "data_offset": 2048, 00:22:10.016 "data_size": 63488 00:22:10.016 }, 00:22:10.016 { 00:22:10.016 "name": "pt3", 00:22:10.016 "uuid": "fbc537d3-0374-5bbb-87a4-b4a4efd4045d", 00:22:10.016 "is_configured": true, 00:22:10.016 "data_offset": 2048, 00:22:10.016 "data_size": 63488 00:22:10.016 } 00:22:10.016 ] 00:22:10.016 }' 00:22:10.016 21:18:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:10.016 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:22:10.588 21:18:33 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:22:10.588 21:18:33 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:10.863 [2024-06-07 21:18:33.300903] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:10.863 [2024-06-07 21:18:33.300947] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:10.863 [2024-06-07 21:18:33.301041] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:10.863 [2024-06-07 21:18:33.301106] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:10.863 [2024-06-07 21:18:33.301117] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:22:10.863 21:18:33 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.863 21:18:33 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:22:11.120 21:18:33 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:22:11.120 21:18:33 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:22:11.120 21:18:33 -- 
bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:11.378 [2024-06-07 21:18:33.840961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:11.378 [2024-06-07 21:18:33.841067] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:11.378 [2024-06-07 21:18:33.841168] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:11.378 [2024-06-07 21:18:33.841193] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:11.378 [2024-06-07 21:18:33.843481] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:11.378 [2024-06-07 21:18:33.843539] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:11.378 [2024-06-07 21:18:33.843643] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:11.378 [2024-06-07 21:18:33.843696] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:11.378 pt1 00:22:11.378 21:18:33 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:11.378 21:18:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:11.378 21:18:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:11.378 21:18:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:11.378 21:18:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:11.378 21:18:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:11.378 21:18:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:11.378 21:18:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:11.378 21:18:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:11.378 21:18:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:11.378 21:18:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.378 21:18:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.636 21:18:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:11.636 "name": "raid_bdev1", 00:22:11.636 "uuid": "29b9b0e5-6be9-400b-9059-e2770dc4c111", 00:22:11.636 "strip_size_kb": 64, 00:22:11.636 "state": "configuring", 00:22:11.636 "raid_level": "raid5f", 00:22:11.636 "superblock": true, 00:22:11.636 "num_base_bdevs": 3, 00:22:11.636 "num_base_bdevs_discovered": 1, 00:22:11.636 "num_base_bdevs_operational": 3, 00:22:11.636 "base_bdevs_list": [ 00:22:11.636 { 00:22:11.636 "name": "pt1", 00:22:11.636 "uuid": "a1e2d9eb-6414-5999-884b-12b63e3a1186", 00:22:11.636 "is_configured": true, 00:22:11.636 "data_offset": 2048, 00:22:11.636 "data_size": 63488 00:22:11.636 }, 00:22:11.636 { 00:22:11.636 "name": null, 00:22:11.636 "uuid": "c5300749-769a-5e7a-98b2-9141268d60ff", 00:22:11.636 "is_configured": false, 00:22:11.636 "data_offset": 2048, 00:22:11.636 "data_size": 63488 00:22:11.636 }, 00:22:11.636 { 00:22:11.636 "name": null, 00:22:11.636 "uuid": "fbc537d3-0374-5bbb-87a4-b4a4efd4045d", 00:22:11.636 "is_configured": false, 00:22:11.636 "data_offset": 2048, 00:22:11.636 "data_size": 63488 00:22:11.636 } 00:22:11.636 ] 00:22:11.636 }' 00:22:11.637 21:18:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:11.637 21:18:34 -- common/autotest_common.sh@10 -- # set +x 00:22:12.203 21:18:34 -- 
bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:22:12.203 21:18:34 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:12.203 21:18:34 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:12.461 21:18:35 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:12.461 21:18:35 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:12.461 21:18:35 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:12.720 21:18:35 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:12.720 21:18:35 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:12.720 21:18:35 -- bdev/bdev_raid.sh@489 -- # i=2 00:22:12.720 21:18:35 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:12.979 [2024-06-07 21:18:35.593620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:12.979 [2024-06-07 21:18:35.593749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:12.979 [2024-06-07 21:18:35.593788] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:22:12.979 [2024-06-07 21:18:35.593832] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:12.979 [2024-06-07 21:18:35.594466] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:12.979 [2024-06-07 21:18:35.594535] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:12.979 [2024-06-07 21:18:35.594659] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:12.979 [2024-06-07 21:18:35.594691] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:12.979 [2024-06-07 21:18:35.594713] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:12.979 [2024-06-07 21:18:35.594783] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:22:12.979 [2024-06-07 21:18:35.594845] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:12.979 pt3 00:22:12.979 21:18:35 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:22:12.979 21:18:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:12.979 21:18:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:12.979 21:18:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:12.979 21:18:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:12.979 21:18:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:12.979 21:18:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:12.979 21:18:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:12.979 21:18:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:12.979 21:18:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:12.979 21:18:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.979 21:18:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.237 21:18:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:13.237 "name": "raid_bdev1", 
00:22:13.237 "uuid": "29b9b0e5-6be9-400b-9059-e2770dc4c111", 00:22:13.237 "strip_size_kb": 64, 00:22:13.237 "state": "configuring", 00:22:13.237 "raid_level": "raid5f", 00:22:13.237 "superblock": true, 00:22:13.237 "num_base_bdevs": 3, 00:22:13.237 "num_base_bdevs_discovered": 1, 00:22:13.237 "num_base_bdevs_operational": 2, 00:22:13.237 "base_bdevs_list": [ 00:22:13.237 { 00:22:13.237 "name": null, 00:22:13.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.237 "is_configured": false, 00:22:13.237 "data_offset": 2048, 00:22:13.237 "data_size": 63488 00:22:13.237 }, 00:22:13.237 { 00:22:13.237 "name": null, 00:22:13.237 "uuid": "c5300749-769a-5e7a-98b2-9141268d60ff", 00:22:13.237 "is_configured": false, 00:22:13.237 "data_offset": 2048, 00:22:13.237 "data_size": 63488 00:22:13.237 }, 00:22:13.237 { 00:22:13.237 "name": "pt3", 00:22:13.237 "uuid": "fbc537d3-0374-5bbb-87a4-b4a4efd4045d", 00:22:13.237 "is_configured": true, 00:22:13.237 "data_offset": 2048, 00:22:13.237 "data_size": 63488 00:22:13.238 } 00:22:13.238 ] 00:22:13.238 }' 00:22:13.238 21:18:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:13.238 21:18:35 -- common/autotest_common.sh@10 -- # set +x 00:22:14.209 21:18:36 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:22:14.209 21:18:36 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:14.209 21:18:36 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:14.209 [2024-06-07 21:18:36.741987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:14.209 [2024-06-07 21:18:36.742100] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:14.209 [2024-06-07 21:18:36.742136] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:22:14.209 [2024-06-07 21:18:36.742162] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.209 [2024-06-07 21:18:36.742694] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.209 [2024-06-07 21:18:36.742759] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:14.209 [2024-06-07 21:18:36.742840] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:14.209 [2024-06-07 21:18:36.742887] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:14.209 [2024-06-07 21:18:36.743055] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:22:14.209 [2024-06-07 21:18:36.743069] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:14.209 [2024-06-07 21:18:36.743151] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:14.209 [2024-06-07 21:18:36.743992] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:22:14.209 [2024-06-07 21:18:36.744017] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:22:14.209 [2024-06-07 21:18:36.744246] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:14.209 pt2 00:22:14.209 21:18:36 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:22:14.209 21:18:36 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:14.209 21:18:36 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 
00:22:14.209 21:18:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:14.209 21:18:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:14.209 21:18:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:14.209 21:18:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:14.209 21:18:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:14.209 21:18:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:14.209 21:18:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:14.209 21:18:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:14.209 21:18:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:14.209 21:18:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.209 21:18:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.468 21:18:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:14.468 "name": "raid_bdev1", 00:22:14.468 "uuid": "29b9b0e5-6be9-400b-9059-e2770dc4c111", 00:22:14.468 "strip_size_kb": 64, 00:22:14.468 "state": "online", 00:22:14.468 "raid_level": "raid5f", 00:22:14.468 "superblock": true, 00:22:14.468 "num_base_bdevs": 3, 00:22:14.468 "num_base_bdevs_discovered": 2, 00:22:14.468 "num_base_bdevs_operational": 2, 00:22:14.468 "base_bdevs_list": [ 00:22:14.468 { 00:22:14.468 "name": null, 00:22:14.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.468 "is_configured": false, 00:22:14.468 "data_offset": 2048, 00:22:14.468 "data_size": 63488 00:22:14.468 }, 00:22:14.468 { 00:22:14.468 "name": "pt2", 00:22:14.468 "uuid": "c5300749-769a-5e7a-98b2-9141268d60ff", 00:22:14.468 "is_configured": true, 00:22:14.468 "data_offset": 2048, 00:22:14.468 "data_size": 63488 00:22:14.468 }, 00:22:14.468 { 00:22:14.468 "name": "pt3", 00:22:14.468 "uuid": "fbc537d3-0374-5bbb-87a4-b4a4efd4045d", 00:22:14.468 "is_configured": true, 00:22:14.468 "data_offset": 2048, 00:22:14.468 "data_size": 63488 00:22:14.468 } 00:22:14.468 ] 00:22:14.468 }' 00:22:14.468 21:18:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:14.468 21:18:37 -- common/autotest_common.sh@10 -- # set +x 00:22:15.033 21:18:37 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:15.033 21:18:37 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:22:15.291 [2024-06-07 21:18:37.906381] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:15.291 21:18:37 -- bdev/bdev_raid.sh@506 -- # '[' 29b9b0e5-6be9-400b-9059-e2770dc4c111 '!=' 29b9b0e5-6be9-400b-9059-e2770dc4c111 ']' 00:22:15.291 21:18:37 -- bdev/bdev_raid.sh@511 -- # killprocess 141510 00:22:15.291 21:18:37 -- common/autotest_common.sh@926 -- # '[' -z 141510 ']' 00:22:15.291 21:18:37 -- common/autotest_common.sh@930 -- # kill -0 141510 00:22:15.291 21:18:37 -- common/autotest_common.sh@931 -- # uname 00:22:15.291 21:18:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:15.291 21:18:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141510 00:22:15.291 killing process with pid 141510 00:22:15.291 21:18:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:15.291 21:18:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:15.291 21:18:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141510' 00:22:15.291 21:18:37 -- common/autotest_common.sh@945 -- # kill 
141510 00:22:15.291 21:18:37 -- common/autotest_common.sh@950 -- # wait 141510 00:22:15.291 [2024-06-07 21:18:37.938620] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:15.291 [2024-06-07 21:18:37.938744] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:15.291 [2024-06-07 21:18:37.938817] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:15.291 [2024-06-07 21:18:37.938836] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:22:15.549 [2024-06-07 21:18:37.969698] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:15.549 ************************************ 00:22:15.549 END TEST raid5f_superblock_test 00:22:15.549 ************************************ 00:22:15.549 21:18:38 -- bdev/bdev_raid.sh@513 -- # return 0 00:22:15.549 00:22:15.549 real 0m18.636s 00:22:15.549 user 0m35.449s 00:22:15.549 sys 0m2.146s 00:22:15.549 21:18:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:15.549 21:18:38 -- common/autotest_common.sh@10 -- # set +x 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:22:15.807 21:18:38 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:15.807 21:18:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:15.807 21:18:38 -- common/autotest_common.sh@10 -- # set +x 00:22:15.807 ************************************ 00:22:15.807 START TEST raid5f_rebuild_test 00:22:15.807 ************************************ 00:22:15.807 21:18:38 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 false false 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:22:15.807 21:18:38 -- 
bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:22:15.807 21:18:38 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:22:15.808 21:18:38 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:22:15.808 21:18:38 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:15.808 21:18:38 -- bdev/bdev_raid.sh@544 -- # raid_pid=142148 00:22:15.808 21:18:38 -- bdev/bdev_raid.sh@545 -- # waitforlisten 142148 /var/tmp/spdk-raid.sock 00:22:15.808 21:18:38 -- common/autotest_common.sh@819 -- # '[' -z 142148 ']' 00:22:15.808 21:18:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:15.808 21:18:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:15.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:15.808 21:18:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:15.808 21:18:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:15.808 21:18:38 -- common/autotest_common.sh@10 -- # set +x 00:22:15.808 21:18:38 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:15.808 [2024-06-07 21:18:38.312520] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:15.808 [2024-06-07 21:18:38.312965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142148 ] 00:22:15.808 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:15.808 Zero copy mechanism will not be used. 
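The EAL banner and the notice above are bdevperf's startup output: the workload issues 3 MiB I/Os (-o 3M), which exceed the 65536-byte zero-copy threshold, so bdevperf will not use its zero-copy path for the data buffers. The launch-and-wait pattern traced at @543-@545 condenses to roughly the following, using the exact bdevperf arguments from this run; the polling loop is a simplified stand-in for the waitforlisten helper, with rpc_get_methods serving as the liveness probe:

    # Start bdevperf in the background, then block until its RPC socket answers.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
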
00:22:15.808 [2024-06-07 21:18:38.477266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.066 [2024-06-07 21:18:38.560408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.066 [2024-06-07 21:18:38.618140] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:16.633 21:18:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:16.633 21:18:39 -- common/autotest_common.sh@852 -- # return 0 00:22:16.633 21:18:39 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:16.633 21:18:39 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:16.634 21:18:39 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:16.892 BaseBdev1 00:22:16.892 21:18:39 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:16.892 21:18:39 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:16.892 21:18:39 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:17.151 BaseBdev2 00:22:17.151 21:18:39 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:17.151 21:18:39 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:17.151 21:18:39 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:17.410 BaseBdev3 00:22:17.410 21:18:39 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:17.668 spare_malloc 00:22:17.668 21:18:40 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:17.668 spare_delay 00:22:17.668 21:18:40 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:17.927 [2024-06-07 21:18:40.533583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:17.927 [2024-06-07 21:18:40.533711] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.927 [2024-06-07 21:18:40.533755] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:17.927 [2024-06-07 21:18:40.533803] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.927 [2024-06-07 21:18:40.536504] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.927 [2024-06-07 21:18:40.536573] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:17.927 spare 00:22:17.927 21:18:40 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:22:18.185 [2024-06-07 21:18:40.733707] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:18.185 [2024-06-07 21:18:40.735889] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:18.185 [2024-06-07 21:18:40.735976] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:18.185 [2024-06-07 21:18:40.736089] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:22:18.185 
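The io_device registration above is followed by the array geometry: each of the three base bdevs was created with bdev_malloc_create 32 512, i.e. 32 MiB in 512-byte blocks, or 65536 blocks apiece, and raid5f spends one strip per stripe on parity, so the usable size is (3 - 1) x 65536 = 131072 blocks. That is exactly what the blockcnt record below reports, and it is also where the 67108864-byte (64 MiB) full-device dd later in this test comes from. As a quick check, assuming equal base bdevs and ignoring strip-boundary rounding:

    # raid5f usable capacity: one strip's worth of parity per stripe.
    num_base_bdevs=3; per_bdev_blocks=65536
    echo $(( (num_base_bdevs - 1) * per_bdev_blocks ))   # 131072 blocks = 64 MiB at 512 B
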
[2024-06-07 21:18:40.736103] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:18.185 [2024-06-07 21:18:40.736338] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:22:18.185 [2024-06-07 21:18:40.737201] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:22:18.185 [2024-06-07 21:18:40.737225] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:22:18.185 [2024-06-07 21:18:40.737445] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:18.185 21:18:40 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:18.185 21:18:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:18.185 21:18:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:18.185 21:18:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:18.185 21:18:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:18.185 21:18:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:18.185 21:18:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:18.185 21:18:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:18.186 21:18:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:18.186 21:18:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:18.186 21:18:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.186 21:18:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.444 21:18:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:18.444 "name": "raid_bdev1", 00:22:18.444 "uuid": "f40186d3-3ba4-4a70-ad27-3a77fa345b00", 00:22:18.444 "strip_size_kb": 64, 00:22:18.444 "state": "online", 00:22:18.444 "raid_level": "raid5f", 00:22:18.444 "superblock": false, 00:22:18.444 "num_base_bdevs": 3, 00:22:18.444 "num_base_bdevs_discovered": 3, 00:22:18.444 "num_base_bdevs_operational": 3, 00:22:18.444 "base_bdevs_list": [ 00:22:18.444 { 00:22:18.444 "name": "BaseBdev1", 00:22:18.444 "uuid": "d6a96521-8e4d-4d95-a13b-2f1163fc5a4a", 00:22:18.444 "is_configured": true, 00:22:18.444 "data_offset": 0, 00:22:18.444 "data_size": 65536 00:22:18.444 }, 00:22:18.444 { 00:22:18.444 "name": "BaseBdev2", 00:22:18.444 "uuid": "0a811710-846c-4649-a92f-5e989d0ff9b0", 00:22:18.444 "is_configured": true, 00:22:18.444 "data_offset": 0, 00:22:18.444 "data_size": 65536 00:22:18.444 }, 00:22:18.444 { 00:22:18.444 "name": "BaseBdev3", 00:22:18.444 "uuid": "b1ab3e44-dfb8-4e93-a1c1-629712a5ede9", 00:22:18.444 "is_configured": true, 00:22:18.444 "data_offset": 0, 00:22:18.444 "data_size": 65536 00:22:18.444 } 00:22:18.444 ] 00:22:18.444 }' 00:22:18.444 21:18:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:18.444 21:18:40 -- common/autotest_common.sh@10 -- # set +x 00:22:19.011 21:18:41 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:19.011 21:18:41 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:19.269 [2024-06-07 21:18:41.866239] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:19.269 21:18:41 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:22:19.269 21:18:41 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:22:19.269 21:18:41 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:19.528 21:18:42 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:19.528 21:18:42 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:19.528 21:18:42 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:19.528 21:18:42 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:19.528 21:18:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:19.528 21:18:42 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:19.528 21:18:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:19.528 21:18:42 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:19.528 21:18:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:19.528 21:18:42 -- bdev/nbd_common.sh@12 -- # local i 00:22:19.528 21:18:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:19.528 21:18:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:19.528 21:18:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:19.787 [2024-06-07 21:18:42.310278] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:19.787 /dev/nbd0 00:22:19.787 21:18:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:19.787 21:18:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:19.787 21:18:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:19.787 21:18:42 -- common/autotest_common.sh@857 -- # local i 00:22:19.787 21:18:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:19.787 21:18:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:19.787 21:18:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:19.787 21:18:42 -- common/autotest_common.sh@861 -- # break 00:22:19.787 21:18:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:19.787 21:18:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:19.787 21:18:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:19.787 1+0 records in 00:22:19.787 1+0 records out 00:22:19.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327063 s, 12.5 MB/s 00:22:19.787 21:18:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:19.787 21:18:42 -- common/autotest_common.sh@874 -- # size=4096 00:22:19.787 21:18:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:19.787 21:18:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:19.787 21:18:42 -- common/autotest_common.sh@877 -- # return 0 00:22:19.787 21:18:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:19.787 21:18:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:19.787 21:18:42 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:22:19.787 21:18:42 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:22:19.787 21:18:42 -- bdev/bdev_raid.sh@582 -- # echo 128 00:22:19.787 21:18:42 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:22:20.045 512+0 records in 00:22:20.045 512+0 records out 00:22:20.045 67108864 bytes (67 MB, 64 MiB) copied, 0.313575 s, 214 MB/s 00:22:20.045 21:18:42 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:20.045 21:18:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:20.045 21:18:42 
-- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:20.045 21:18:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:20.045 21:18:42 -- bdev/nbd_common.sh@51 -- # local i 00:22:20.045 21:18:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:20.045 21:18:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:20.302 21:18:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:20.302 21:18:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:20.302 21:18:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:20.302 21:18:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:20.302 21:18:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:20.302 21:18:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:20.302 21:18:42 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:20.302 [2024-06-07 21:18:42.910307] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:20.562 21:18:43 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:20.562 21:18:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:20.562 21:18:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:20.562 21:18:43 -- bdev/nbd_common.sh@41 -- # break 00:22:20.562 21:18:43 -- bdev/nbd_common.sh@45 -- # return 0 00:22:20.562 21:18:43 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:20.562 [2024-06-07 21:18:43.194455] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:20.562 21:18:43 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:20.562 21:18:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:20.562 21:18:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:20.563 21:18:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:20.563 21:18:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:20.563 21:18:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:20.563 21:18:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:20.563 21:18:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:20.563 21:18:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:20.563 21:18:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:20.563 21:18:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.563 21:18:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.821 21:18:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:20.821 "name": "raid_bdev1", 00:22:20.821 "uuid": "f40186d3-3ba4-4a70-ad27-3a77fa345b00", 00:22:20.821 "strip_size_kb": 64, 00:22:20.821 "state": "online", 00:22:20.821 "raid_level": "raid5f", 00:22:20.821 "superblock": false, 00:22:20.821 "num_base_bdevs": 3, 00:22:20.821 "num_base_bdevs_discovered": 2, 00:22:20.821 "num_base_bdevs_operational": 2, 00:22:20.821 "base_bdevs_list": [ 00:22:20.821 { 00:22:20.821 "name": null, 00:22:20.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.821 "is_configured": false, 00:22:20.821 "data_offset": 0, 00:22:20.821 "data_size": 65536 00:22:20.821 }, 00:22:20.821 { 00:22:20.821 "name": "BaseBdev2", 00:22:20.821 "uuid": "0a811710-846c-4649-a92f-5e989d0ff9b0", 00:22:20.821 "is_configured": true, 00:22:20.821 "data_offset": 0, 00:22:20.821 "data_size": 65536 00:22:20.821 }, 
00:22:20.821 { 00:22:20.821 "name": "BaseBdev3", 00:22:20.821 "uuid": "b1ab3e44-dfb8-4e93-a1c1-629712a5ede9", 00:22:20.821 "is_configured": true, 00:22:20.821 "data_offset": 0, 00:22:20.821 "data_size": 65536 00:22:20.821 } 00:22:20.821 ] 00:22:20.821 }' 00:22:20.821 21:18:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:20.821 21:18:43 -- common/autotest_common.sh@10 -- # set +x 00:22:21.753 21:18:44 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:21.753 [2024-06-07 21:18:44.314691] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:21.753 [2024-06-07 21:18:44.314768] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:21.753 [2024-06-07 21:18:44.319958] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cfb0 00:22:21.753 [2024-06-07 21:18:44.322441] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:21.753 21:18:44 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:22.686 21:18:45 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:22.686 21:18:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:22.687 21:18:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:22.687 21:18:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:22.687 21:18:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:22.687 21:18:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.687 21:18:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.944 21:18:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:22.944 "name": "raid_bdev1", 00:22:22.944 "uuid": "f40186d3-3ba4-4a70-ad27-3a77fa345b00", 00:22:22.944 "strip_size_kb": 64, 00:22:22.944 "state": "online", 00:22:22.944 "raid_level": "raid5f", 00:22:22.944 "superblock": false, 00:22:22.944 "num_base_bdevs": 3, 00:22:22.944 "num_base_bdevs_discovered": 3, 00:22:22.944 "num_base_bdevs_operational": 3, 00:22:22.944 "process": { 00:22:22.944 "type": "rebuild", 00:22:22.944 "target": "spare", 00:22:22.944 "progress": { 00:22:22.944 "blocks": 24576, 00:22:22.944 "percent": 18 00:22:22.944 } 00:22:22.944 }, 00:22:22.944 "base_bdevs_list": [ 00:22:22.944 { 00:22:22.944 "name": "spare", 00:22:22.944 "uuid": "deb60df3-7a17-5174-8d2c-d73a13a2410e", 00:22:22.944 "is_configured": true, 00:22:22.944 "data_offset": 0, 00:22:22.944 "data_size": 65536 00:22:22.944 }, 00:22:22.944 { 00:22:22.944 "name": "BaseBdev2", 00:22:22.944 "uuid": "0a811710-846c-4649-a92f-5e989d0ff9b0", 00:22:22.944 "is_configured": true, 00:22:22.944 "data_offset": 0, 00:22:22.944 "data_size": 65536 00:22:22.944 }, 00:22:22.944 { 00:22:22.944 "name": "BaseBdev3", 00:22:22.944 "uuid": "b1ab3e44-dfb8-4e93-a1c1-629712a5ede9", 00:22:22.944 "is_configured": true, 00:22:22.944 "data_offset": 0, 00:22:22.944 "data_size": 65536 00:22:22.944 } 00:22:22.944 ] 00:22:22.944 }' 00:22:22.944 21:18:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:23.202 21:18:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:23.202 21:18:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:23.202 21:18:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:23.202 21:18:45 -- bdev/bdev_raid.sh@604 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:23.202 [2024-06-07 21:18:45.864741] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:23.461 [2024-06-07 21:18:45.936162] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:23.461 [2024-06-07 21:18:45.936271] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:23.461 21:18:45 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:23.461 21:18:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:23.461 21:18:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:23.461 21:18:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:23.461 21:18:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:23.461 21:18:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:23.461 21:18:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:23.461 21:18:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:23.461 21:18:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:23.461 21:18:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:23.461 21:18:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.461 21:18:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.719 21:18:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:23.719 "name": "raid_bdev1", 00:22:23.719 "uuid": "f40186d3-3ba4-4a70-ad27-3a77fa345b00", 00:22:23.719 "strip_size_kb": 64, 00:22:23.719 "state": "online", 00:22:23.719 "raid_level": "raid5f", 00:22:23.719 "superblock": false, 00:22:23.719 "num_base_bdevs": 3, 00:22:23.719 "num_base_bdevs_discovered": 2, 00:22:23.719 "num_base_bdevs_operational": 2, 00:22:23.719 "base_bdevs_list": [ 00:22:23.719 { 00:22:23.719 "name": null, 00:22:23.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.719 "is_configured": false, 00:22:23.719 "data_offset": 0, 00:22:23.719 "data_size": 65536 00:22:23.719 }, 00:22:23.719 { 00:22:23.719 "name": "BaseBdev2", 00:22:23.719 "uuid": "0a811710-846c-4649-a92f-5e989d0ff9b0", 00:22:23.719 "is_configured": true, 00:22:23.719 "data_offset": 0, 00:22:23.719 "data_size": 65536 00:22:23.719 }, 00:22:23.719 { 00:22:23.719 "name": "BaseBdev3", 00:22:23.719 "uuid": "b1ab3e44-dfb8-4e93-a1c1-629712a5ede9", 00:22:23.719 "is_configured": true, 00:22:23.719 "data_offset": 0, 00:22:23.719 "data_size": 65536 00:22:23.719 } 00:22:23.719 ] 00:22:23.719 }' 00:22:23.719 21:18:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:23.719 21:18:46 -- common/autotest_common.sh@10 -- # set +x 00:22:24.284 21:18:46 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:24.284 21:18:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:24.284 21:18:46 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:24.284 21:18:46 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:24.284 21:18:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:24.284 21:18:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.284 21:18:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.542 21:18:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:24.542 "name": 
"raid_bdev1", 00:22:24.542 "uuid": "f40186d3-3ba4-4a70-ad27-3a77fa345b00", 00:22:24.542 "strip_size_kb": 64, 00:22:24.542 "state": "online", 00:22:24.542 "raid_level": "raid5f", 00:22:24.542 "superblock": false, 00:22:24.542 "num_base_bdevs": 3, 00:22:24.542 "num_base_bdevs_discovered": 2, 00:22:24.542 "num_base_bdevs_operational": 2, 00:22:24.542 "base_bdevs_list": [ 00:22:24.542 { 00:22:24.542 "name": null, 00:22:24.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.542 "is_configured": false, 00:22:24.542 "data_offset": 0, 00:22:24.542 "data_size": 65536 00:22:24.542 }, 00:22:24.542 { 00:22:24.542 "name": "BaseBdev2", 00:22:24.542 "uuid": "0a811710-846c-4649-a92f-5e989d0ff9b0", 00:22:24.542 "is_configured": true, 00:22:24.542 "data_offset": 0, 00:22:24.542 "data_size": 65536 00:22:24.542 }, 00:22:24.542 { 00:22:24.543 "name": "BaseBdev3", 00:22:24.543 "uuid": "b1ab3e44-dfb8-4e93-a1c1-629712a5ede9", 00:22:24.543 "is_configured": true, 00:22:24.543 "data_offset": 0, 00:22:24.543 "data_size": 65536 00:22:24.543 } 00:22:24.543 ] 00:22:24.543 }' 00:22:24.543 21:18:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:24.543 21:18:47 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:24.543 21:18:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:24.543 21:18:47 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:24.543 21:18:47 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:24.808 [2024-06-07 21:18:47.447114] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:24.808 [2024-06-07 21:18:47.447177] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:24.808 [2024-06-07 21:18:47.452295] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d150 00:22:24.808 [2024-06-07 21:18:47.454688] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:24.808 21:18:47 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:26.199 21:18:48 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:26.199 21:18:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:26.199 21:18:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:26.199 21:18:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:26.199 21:18:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:26.199 21:18:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.199 21:18:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.199 21:18:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:26.199 "name": "raid_bdev1", 00:22:26.199 "uuid": "f40186d3-3ba4-4a70-ad27-3a77fa345b00", 00:22:26.199 "strip_size_kb": 64, 00:22:26.199 "state": "online", 00:22:26.199 "raid_level": "raid5f", 00:22:26.199 "superblock": false, 00:22:26.199 "num_base_bdevs": 3, 00:22:26.199 "num_base_bdevs_discovered": 3, 00:22:26.199 "num_base_bdevs_operational": 3, 00:22:26.199 "process": { 00:22:26.199 "type": "rebuild", 00:22:26.199 "target": "spare", 00:22:26.199 "progress": { 00:22:26.199 "blocks": 24576, 00:22:26.199 "percent": 18 00:22:26.199 } 00:22:26.199 }, 00:22:26.199 "base_bdevs_list": [ 00:22:26.199 { 00:22:26.199 "name": "spare", 00:22:26.199 "uuid": "deb60df3-7a17-5174-8d2c-d73a13a2410e", 
00:22:26.199 "is_configured": true, 00:22:26.199 "data_offset": 0, 00:22:26.200 "data_size": 65536 00:22:26.200 }, 00:22:26.200 { 00:22:26.200 "name": "BaseBdev2", 00:22:26.200 "uuid": "0a811710-846c-4649-a92f-5e989d0ff9b0", 00:22:26.200 "is_configured": true, 00:22:26.200 "data_offset": 0, 00:22:26.200 "data_size": 65536 00:22:26.200 }, 00:22:26.200 { 00:22:26.200 "name": "BaseBdev3", 00:22:26.200 "uuid": "b1ab3e44-dfb8-4e93-a1c1-629712a5ede9", 00:22:26.200 "is_configured": true, 00:22:26.200 "data_offset": 0, 00:22:26.200 "data_size": 65536 00:22:26.200 } 00:22:26.200 ] 00:22:26.200 }' 00:22:26.200 21:18:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:26.200 21:18:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:26.200 21:18:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:26.200 21:18:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:26.200 21:18:48 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:26.200 21:18:48 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:22:26.200 21:18:48 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:22:26.200 21:18:48 -- bdev/bdev_raid.sh@657 -- # local timeout=588 00:22:26.200 21:18:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:26.200 21:18:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:26.200 21:18:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:26.200 21:18:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:26.200 21:18:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:26.200 21:18:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:26.200 21:18:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.200 21:18:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.458 21:18:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:26.458 "name": "raid_bdev1", 00:22:26.458 "uuid": "f40186d3-3ba4-4a70-ad27-3a77fa345b00", 00:22:26.458 "strip_size_kb": 64, 00:22:26.458 "state": "online", 00:22:26.458 "raid_level": "raid5f", 00:22:26.458 "superblock": false, 00:22:26.458 "num_base_bdevs": 3, 00:22:26.458 "num_base_bdevs_discovered": 3, 00:22:26.458 "num_base_bdevs_operational": 3, 00:22:26.458 "process": { 00:22:26.458 "type": "rebuild", 00:22:26.458 "target": "spare", 00:22:26.458 "progress": { 00:22:26.458 "blocks": 30720, 00:22:26.458 "percent": 23 00:22:26.458 } 00:22:26.458 }, 00:22:26.458 "base_bdevs_list": [ 00:22:26.458 { 00:22:26.458 "name": "spare", 00:22:26.458 "uuid": "deb60df3-7a17-5174-8d2c-d73a13a2410e", 00:22:26.458 "is_configured": true, 00:22:26.458 "data_offset": 0, 00:22:26.458 "data_size": 65536 00:22:26.458 }, 00:22:26.458 { 00:22:26.458 "name": "BaseBdev2", 00:22:26.458 "uuid": "0a811710-846c-4649-a92f-5e989d0ff9b0", 00:22:26.458 "is_configured": true, 00:22:26.458 "data_offset": 0, 00:22:26.458 "data_size": 65536 00:22:26.458 }, 00:22:26.458 { 00:22:26.458 "name": "BaseBdev3", 00:22:26.458 "uuid": "b1ab3e44-dfb8-4e93-a1c1-629712a5ede9", 00:22:26.458 "is_configured": true, 00:22:26.458 "data_offset": 0, 00:22:26.458 "data_size": 65536 00:22:26.458 } 00:22:26.458 ] 00:22:26.458 }' 00:22:26.458 21:18:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:26.458 21:18:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:26.458 21:18:49 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:22:26.717 21:18:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:26.717 21:18:49 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:27.653 21:18:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:27.653 21:18:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:27.653 21:18:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:27.653 21:18:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:27.653 21:18:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:27.653 21:18:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:27.653 21:18:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.653 21:18:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.910 21:18:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:27.910 "name": "raid_bdev1", 00:22:27.910 "uuid": "f40186d3-3ba4-4a70-ad27-3a77fa345b00", 00:22:27.910 "strip_size_kb": 64, 00:22:27.910 "state": "online", 00:22:27.910 "raid_level": "raid5f", 00:22:27.910 "superblock": false, 00:22:27.910 "num_base_bdevs": 3, 00:22:27.910 "num_base_bdevs_discovered": 3, 00:22:27.910 "num_base_bdevs_operational": 3, 00:22:27.910 "process": { 00:22:27.910 "type": "rebuild", 00:22:27.910 "target": "spare", 00:22:27.910 "progress": { 00:22:27.910 "blocks": 59392, 00:22:27.910 "percent": 45 00:22:27.910 } 00:22:27.910 }, 00:22:27.910 "base_bdevs_list": [ 00:22:27.910 { 00:22:27.910 "name": "spare", 00:22:27.910 "uuid": "deb60df3-7a17-5174-8d2c-d73a13a2410e", 00:22:27.910 "is_configured": true, 00:22:27.910 "data_offset": 0, 00:22:27.910 "data_size": 65536 00:22:27.910 }, 00:22:27.910 { 00:22:27.910 "name": "BaseBdev2", 00:22:27.910 "uuid": "0a811710-846c-4649-a92f-5e989d0ff9b0", 00:22:27.910 "is_configured": true, 00:22:27.910 "data_offset": 0, 00:22:27.910 "data_size": 65536 00:22:27.910 }, 00:22:27.910 { 00:22:27.910 "name": "BaseBdev3", 00:22:27.910 "uuid": "b1ab3e44-dfb8-4e93-a1c1-629712a5ede9", 00:22:27.910 "is_configured": true, 00:22:27.910 "data_offset": 0, 00:22:27.911 "data_size": 65536 00:22:27.911 } 00:22:27.911 ] 00:22:27.911 }' 00:22:27.911 21:18:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:27.911 21:18:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:27.911 21:18:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:27.911 21:18:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:27.911 21:18:50 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:29.284 21:18:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:29.284 21:18:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:29.284 21:18:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:29.284 21:18:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:29.284 21:18:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:29.284 21:18:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:29.284 21:18:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.284 21:18:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.284 21:18:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:29.284 "name": "raid_bdev1", 00:22:29.284 "uuid": "f40186d3-3ba4-4a70-ad27-3a77fa345b00", 
00:22:29.284 "strip_size_kb": 64, 00:22:29.284 "state": "online", 00:22:29.284 "raid_level": "raid5f", 00:22:29.284 "superblock": false, 00:22:29.284 "num_base_bdevs": 3, 00:22:29.284 "num_base_bdevs_discovered": 3, 00:22:29.284 "num_base_bdevs_operational": 3, 00:22:29.284 "process": { 00:22:29.284 "type": "rebuild", 00:22:29.284 "target": "spare", 00:22:29.284 "progress": { 00:22:29.284 "blocks": 88064, 00:22:29.284 "percent": 67 00:22:29.284 } 00:22:29.284 }, 00:22:29.284 "base_bdevs_list": [ 00:22:29.284 { 00:22:29.284 "name": "spare", 00:22:29.284 "uuid": "deb60df3-7a17-5174-8d2c-d73a13a2410e", 00:22:29.284 "is_configured": true, 00:22:29.284 "data_offset": 0, 00:22:29.284 "data_size": 65536 00:22:29.284 }, 00:22:29.284 { 00:22:29.284 "name": "BaseBdev2", 00:22:29.284 "uuid": "0a811710-846c-4649-a92f-5e989d0ff9b0", 00:22:29.284 "is_configured": true, 00:22:29.284 "data_offset": 0, 00:22:29.284 "data_size": 65536 00:22:29.284 }, 00:22:29.284 { 00:22:29.284 "name": "BaseBdev3", 00:22:29.284 "uuid": "b1ab3e44-dfb8-4e93-a1c1-629712a5ede9", 00:22:29.284 "is_configured": true, 00:22:29.284 "data_offset": 0, 00:22:29.284 "data_size": 65536 00:22:29.284 } 00:22:29.284 ] 00:22:29.284 }' 00:22:29.284 21:18:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:29.284 21:18:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:29.284 21:18:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:29.284 21:18:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:29.284 21:18:51 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:30.662 21:18:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:30.662 21:18:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:30.662 21:18:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:30.662 21:18:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:30.662 21:18:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:30.662 21:18:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:30.662 21:18:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.662 21:18:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.662 21:18:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:30.662 "name": "raid_bdev1", 00:22:30.662 "uuid": "f40186d3-3ba4-4a70-ad27-3a77fa345b00", 00:22:30.662 "strip_size_kb": 64, 00:22:30.662 "state": "online", 00:22:30.662 "raid_level": "raid5f", 00:22:30.662 "superblock": false, 00:22:30.662 "num_base_bdevs": 3, 00:22:30.662 "num_base_bdevs_discovered": 3, 00:22:30.662 "num_base_bdevs_operational": 3, 00:22:30.662 "process": { 00:22:30.662 "type": "rebuild", 00:22:30.662 "target": "spare", 00:22:30.662 "progress": { 00:22:30.662 "blocks": 114688, 00:22:30.662 "percent": 87 00:22:30.662 } 00:22:30.662 }, 00:22:30.662 "base_bdevs_list": [ 00:22:30.662 { 00:22:30.662 "name": "spare", 00:22:30.662 "uuid": "deb60df3-7a17-5174-8d2c-d73a13a2410e", 00:22:30.662 "is_configured": true, 00:22:30.662 "data_offset": 0, 00:22:30.662 "data_size": 65536 00:22:30.662 }, 00:22:30.662 { 00:22:30.662 "name": "BaseBdev2", 00:22:30.662 "uuid": "0a811710-846c-4649-a92f-5e989d0ff9b0", 00:22:30.662 "is_configured": true, 00:22:30.662 "data_offset": 0, 00:22:30.662 "data_size": 65536 00:22:30.662 }, 00:22:30.662 { 00:22:30.662 "name": "BaseBdev3", 00:22:30.662 "uuid": "b1ab3e44-dfb8-4e93-a1c1-629712a5ede9", 
00:22:30.662 "is_configured": true, 00:22:30.662 "data_offset": 0, 00:22:30.662 "data_size": 65536 00:22:30.662 } 00:22:30.662 ] 00:22:30.662 }' 00:22:30.662 21:18:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:30.662 21:18:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:30.662 21:18:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:30.662 21:18:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:30.662 21:18:53 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:31.597 [2024-06-07 21:18:53.907866] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:31.597 [2024-06-07 21:18:53.907978] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:31.597 [2024-06-07 21:18:53.908063] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:31.855 21:18:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:31.855 21:18:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:31.855 21:18:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:31.855 21:18:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:31.855 21:18:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:31.855 21:18:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:31.855 21:18:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.855 21:18:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.855 21:18:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:31.855 "name": "raid_bdev1", 00:22:31.855 "uuid": "f40186d3-3ba4-4a70-ad27-3a77fa345b00", 00:22:31.855 "strip_size_kb": 64, 00:22:31.855 "state": "online", 00:22:31.855 "raid_level": "raid5f", 00:22:31.855 "superblock": false, 00:22:31.855 "num_base_bdevs": 3, 00:22:31.855 "num_base_bdevs_discovered": 3, 00:22:31.855 "num_base_bdevs_operational": 3, 00:22:31.855 "base_bdevs_list": [ 00:22:31.855 { 00:22:31.855 "name": "spare", 00:22:31.855 "uuid": "deb60df3-7a17-5174-8d2c-d73a13a2410e", 00:22:31.855 "is_configured": true, 00:22:31.855 "data_offset": 0, 00:22:31.855 "data_size": 65536 00:22:31.855 }, 00:22:31.855 { 00:22:31.855 "name": "BaseBdev2", 00:22:31.855 "uuid": "0a811710-846c-4649-a92f-5e989d0ff9b0", 00:22:31.855 "is_configured": true, 00:22:31.855 "data_offset": 0, 00:22:31.855 "data_size": 65536 00:22:31.855 }, 00:22:31.855 { 00:22:31.855 "name": "BaseBdev3", 00:22:31.855 "uuid": "b1ab3e44-dfb8-4e93-a1c1-629712a5ede9", 00:22:31.855 "is_configured": true, 00:22:31.855 "data_offset": 0, 00:22:31.855 "data_size": 65536 00:22:31.855 } 00:22:31.855 ] 00:22:31.855 }' 00:22:31.855 21:18:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:31.855 21:18:54 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:31.855 21:18:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:32.113 21:18:54 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:32.113 21:18:54 -- bdev/bdev_raid.sh@660 -- # break 00:22:32.113 21:18:54 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:32.113 21:18:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:32.113 21:18:54 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:32.113 21:18:54 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:32.113 21:18:54 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:32.113 21:18:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.113 21:18:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.371 21:18:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:32.371 "name": "raid_bdev1", 00:22:32.371 "uuid": "f40186d3-3ba4-4a70-ad27-3a77fa345b00", 00:22:32.371 "strip_size_kb": 64, 00:22:32.371 "state": "online", 00:22:32.371 "raid_level": "raid5f", 00:22:32.371 "superblock": false, 00:22:32.371 "num_base_bdevs": 3, 00:22:32.371 "num_base_bdevs_discovered": 3, 00:22:32.371 "num_base_bdevs_operational": 3, 00:22:32.371 "base_bdevs_list": [ 00:22:32.371 { 00:22:32.371 "name": "spare", 00:22:32.371 "uuid": "deb60df3-7a17-5174-8d2c-d73a13a2410e", 00:22:32.371 "is_configured": true, 00:22:32.371 "data_offset": 0, 00:22:32.371 "data_size": 65536 00:22:32.371 }, 00:22:32.371 { 00:22:32.371 "name": "BaseBdev2", 00:22:32.371 "uuid": "0a811710-846c-4649-a92f-5e989d0ff9b0", 00:22:32.371 "is_configured": true, 00:22:32.371 "data_offset": 0, 00:22:32.371 "data_size": 65536 00:22:32.371 }, 00:22:32.371 { 00:22:32.371 "name": "BaseBdev3", 00:22:32.371 "uuid": "b1ab3e44-dfb8-4e93-a1c1-629712a5ede9", 00:22:32.371 "is_configured": true, 00:22:32.371 "data_offset": 0, 00:22:32.371 "data_size": 65536 00:22:32.371 } 00:22:32.371 ] 00:22:32.371 }' 00:22:32.371 21:18:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:32.371 21:18:54 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:32.371 21:18:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:32.371 21:18:54 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:32.371 21:18:54 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:32.371 21:18:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:32.371 21:18:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:32.371 21:18:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:32.371 21:18:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:32.371 21:18:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:32.371 21:18:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:32.371 21:18:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:32.371 21:18:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:32.371 21:18:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:32.371 21:18:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.371 21:18:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.630 21:18:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:32.630 "name": "raid_bdev1", 00:22:32.630 "uuid": "f40186d3-3ba4-4a70-ad27-3a77fa345b00", 00:22:32.630 "strip_size_kb": 64, 00:22:32.630 "state": "online", 00:22:32.630 "raid_level": "raid5f", 00:22:32.630 "superblock": false, 00:22:32.630 "num_base_bdevs": 3, 00:22:32.630 "num_base_bdevs_discovered": 3, 00:22:32.630 "num_base_bdevs_operational": 3, 00:22:32.630 "base_bdevs_list": [ 00:22:32.630 { 00:22:32.630 "name": "spare", 00:22:32.630 "uuid": "deb60df3-7a17-5174-8d2c-d73a13a2410e", 00:22:32.630 "is_configured": true, 00:22:32.630 "data_offset": 0, 00:22:32.630 "data_size": 65536 00:22:32.630 }, 00:22:32.630 { 00:22:32.630 "name": "BaseBdev2", 
00:22:32.630 "uuid": "0a811710-846c-4649-a92f-5e989d0ff9b0", 00:22:32.630 "is_configured": true, 00:22:32.630 "data_offset": 0, 00:22:32.630 "data_size": 65536 00:22:32.630 }, 00:22:32.630 { 00:22:32.630 "name": "BaseBdev3", 00:22:32.630 "uuid": "b1ab3e44-dfb8-4e93-a1c1-629712a5ede9", 00:22:32.630 "is_configured": true, 00:22:32.630 "data_offset": 0, 00:22:32.630 "data_size": 65536 00:22:32.630 } 00:22:32.630 ] 00:22:32.630 }' 00:22:32.630 21:18:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:32.630 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:22:33.565 21:18:55 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:33.565 [2024-06-07 21:18:56.146629] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:33.565 [2024-06-07 21:18:56.146665] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:33.565 [2024-06-07 21:18:56.146795] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:33.565 [2024-06-07 21:18:56.146875] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:33.565 [2024-06-07 21:18:56.146889] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:22:33.565 21:18:56 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.565 21:18:56 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:33.824 21:18:56 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:33.824 21:18:56 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:33.824 21:18:56 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:33.824 21:18:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:33.824 21:18:56 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:33.824 21:18:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:33.824 21:18:56 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:33.824 21:18:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:33.824 21:18:56 -- bdev/nbd_common.sh@12 -- # local i 00:22:33.824 21:18:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:33.824 21:18:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:33.824 21:18:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:34.083 /dev/nbd0 00:22:34.083 21:18:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:34.083 21:18:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:34.083 21:18:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:34.083 21:18:56 -- common/autotest_common.sh@857 -- # local i 00:22:34.083 21:18:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:34.083 21:18:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:34.083 21:18:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:34.083 21:18:56 -- common/autotest_common.sh@861 -- # break 00:22:34.083 21:18:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:34.083 21:18:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:34.083 21:18:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:34.083 1+0 records in 00:22:34.083 1+0 records out 
00:22:34.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479696 s, 8.5 MB/s 00:22:34.083 21:18:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:34.083 21:18:56 -- common/autotest_common.sh@874 -- # size=4096 00:22:34.083 21:18:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:34.083 21:18:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:34.083 21:18:56 -- common/autotest_common.sh@877 -- # return 0 00:22:34.083 21:18:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:34.083 21:18:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:34.083 21:18:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:34.342 /dev/nbd1 00:22:34.342 21:18:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:34.342 21:18:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:34.342 21:18:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:34.342 21:18:56 -- common/autotest_common.sh@857 -- # local i 00:22:34.342 21:18:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:34.342 21:18:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:34.342 21:18:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:34.342 21:18:56 -- common/autotest_common.sh@861 -- # break 00:22:34.342 21:18:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:34.342 21:18:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:34.342 21:18:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:34.342 1+0 records in 00:22:34.342 1+0 records out 00:22:34.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275764 s, 14.9 MB/s 00:22:34.342 21:18:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:34.342 21:18:56 -- common/autotest_common.sh@874 -- # size=4096 00:22:34.342 21:18:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:34.342 21:18:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:34.342 21:18:56 -- common/autotest_common.sh@877 -- # return 0 00:22:34.342 21:18:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:34.342 21:18:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:34.342 21:18:56 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:34.600 21:18:57 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:34.600 21:18:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:34.600 21:18:57 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:34.600 21:18:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:34.600 21:18:57 -- bdev/nbd_common.sh@51 -- # local i 00:22:34.600 21:18:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:34.600 21:18:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:34.858 21:18:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:34.858 21:18:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:34.858 21:18:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:34.859 21:18:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:34.859 21:18:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:34.859 21:18:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:22:34.859 21:18:57 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:34.859 21:18:57 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:34.859 21:18:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:34.859 21:18:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:34.859 21:18:57 -- bdev/nbd_common.sh@41 -- # break 00:22:34.859 21:18:57 -- bdev/nbd_common.sh@45 -- # return 0 00:22:34.859 21:18:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:34.859 21:18:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:35.116 21:18:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:35.116 21:18:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:35.116 21:18:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:35.116 21:18:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:35.116 21:18:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:35.116 21:18:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:35.116 21:18:57 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:35.116 21:18:57 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:35.116 21:18:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:35.116 21:18:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:35.116 21:18:57 -- bdev/nbd_common.sh@41 -- # break 00:22:35.116 21:18:57 -- bdev/nbd_common.sh@45 -- # return 0 00:22:35.116 21:18:57 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:35.116 21:18:57 -- bdev/bdev_raid.sh@709 -- # killprocess 142148 00:22:35.116 21:18:57 -- common/autotest_common.sh@926 -- # '[' -z 142148 ']' 00:22:35.116 21:18:57 -- common/autotest_common.sh@930 -- # kill -0 142148 00:22:35.116 21:18:57 -- common/autotest_common.sh@931 -- # uname 00:22:35.116 21:18:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:35.116 21:18:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142148
00:22:35.116 killing process with pid 142148
00:22:35.116 Received shutdown signal, test time was about 60.000000 seconds
00:22:35.116
00:22:35.116 Latency(us)
00:22:35.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:35.116 ===================================================================================================================
00:22:35.116 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:22:35.116 21:18:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:35.116 21:18:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:35.116 21:18:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142148' 00:22:35.116 21:18:57 -- common/autotest_common.sh@945 -- # kill 142148 00:22:35.116 21:18:57 -- common/autotest_common.sh@950 -- # wait 142148 00:22:35.116 [2024-06-07 21:18:57.781576] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:35.374 [2024-06-07 21:18:57.818719] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:35.632 ************************************ 00:22:35.632 END TEST raid5f_rebuild_test 00:22:35.632 ************************************ 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:35.632 00:22:35.632 real 0m19.797s 00:22:35.632 user 0m30.500s 00:22:35.632 sys 0m2.332s 00:22:35.632 21:18:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:35.632 21:18:58 -- common/autotest_common.sh@10 -- # set +x 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@749
-- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:22:35.632 21:18:58 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:35.632 21:18:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:35.632 21:18:58 -- common/autotest_common.sh@10 -- # set +x 00:22:35.632 ************************************ 00:22:35.632 START TEST raid5f_rebuild_test_sb 00:22:35.632 ************************************ 00:22:35.632 21:18:58 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 true false 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@544 -- # raid_pid=142717 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@545 -- # waitforlisten 142717 /var/tmp/spdk-raid.sock 00:22:35.632 21:18:58 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:35.632 21:18:58 -- common/autotest_common.sh@819 -- # '[' -z 142717 ']' 00:22:35.632 21:18:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:35.632 21:18:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:35.632 21:18:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:35.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
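The run_test banner above starts the superblock ("_sb") variant of the rebuild test: the same raid5f layout over three base bdevs, but with create_arg gaining -s so each base bdev carries an on-disk superblock. The harness launches bdevperf idle on a private RPC socket and assembles the whole stack over RPC before any I/O can start. A minimal sketch of that launch pattern follows; the command line itself is taken from the log, while the flag glosses in the comments are our reading of bdevperf's usage and should be treated as assumptions:

  # bdevperf starts suspended (-z) on its own RPC socket; the test then builds
  # the bdev stack through rpc.py and only afterwards lets the workload run.
  #   -r  private RPC socket      -T  bdev under test      -t  run time (s)
  #   -w randrw -M 50             50/50 random read/write mix
  #   -o 3M -q 2                  3 MiB I/Os at queue depth 2
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
      -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock  # autotest_common.sh helper, as in the log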
00:22:35.632 21:18:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:35.632 21:18:58 -- common/autotest_common.sh@10 -- # set +x 00:22:35.632 [2024-06-07 21:18:58.174206] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:35.632 [2024-06-07 21:18:58.174446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142717 ] 00:22:35.632 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:35.632 Zero copy mechanism will not be used. 00:22:35.913 [2024-06-07 21:18:58.340285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.913 [2024-06-07 21:18:58.400012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.913 [2024-06-07 21:18:58.455884] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:36.486 21:18:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:36.486 21:18:59 -- common/autotest_common.sh@852 -- # return 0 00:22:36.486 21:18:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:36.486 21:18:59 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:36.486 21:18:59 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:36.745 BaseBdev1_malloc 00:22:36.745 21:18:59 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:37.003 [2024-06-07 21:18:59.522494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:37.003 [2024-06-07 21:18:59.522637] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.003 [2024-06-07 21:18:59.522697] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:22:37.003 [2024-06-07 21:18:59.522759] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.003 [2024-06-07 21:18:59.525515] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.003 [2024-06-07 21:18:59.525583] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:37.003 BaseBdev1 00:22:37.003 21:18:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:37.003 21:18:59 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:37.003 21:18:59 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:37.261 BaseBdev2_malloc 00:22:37.261 21:18:59 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:37.519 [2024-06-07 21:18:59.969294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:37.519 [2024-06-07 21:18:59.969397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.519 [2024-06-07 21:18:59.969446] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:22:37.519 [2024-06-07 21:18:59.969497] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.519 [2024-06-07 21:18:59.971776] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:22:37.519 [2024-06-07 21:18:59.971854] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:37.519 BaseBdev2 00:22:37.519 21:18:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:37.519 21:18:59 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:37.519 21:18:59 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:37.519 BaseBdev3_malloc 00:22:37.778 21:19:00 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:37.778 [2024-06-07 21:19:00.384395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:37.778 [2024-06-07 21:19:00.384493] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.778 [2024-06-07 21:19:00.384536] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:37.778 [2024-06-07 21:19:00.384583] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.778 [2024-06-07 21:19:00.386748] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.778 [2024-06-07 21:19:00.386815] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:37.778 BaseBdev3 00:22:37.778 21:19:00 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:38.037 spare_malloc 00:22:38.037 21:19:00 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:38.296 spare_delay 00:22:38.296 21:19:00 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:38.554 [2024-06-07 21:19:01.107661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:38.554 [2024-06-07 21:19:01.107759] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.554 [2024-06-07 21:19:01.107799] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:38.554 [2024-06-07 21:19:01.107841] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.554 [2024-06-07 21:19:01.110121] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.554 [2024-06-07 21:19:01.110199] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:38.554 spare 00:22:38.554 21:19:01 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:22:38.812 [2024-06-07 21:19:01.303833] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:38.813 [2024-06-07 21:19:01.305612] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:38.813 [2024-06-07 21:19:01.305699] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:38.813 [2024-06-07 21:19:01.305953] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:22:38.813 [2024-06-07 
21:19:01.305988] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:38.813 [2024-06-07 21:19:01.306128] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:22:38.813 [2024-06-07 21:19:01.306886] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:22:38.813 [2024-06-07 21:19:01.306910] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:22:38.813 [2024-06-07 21:19:01.307075] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.813 21:19:01 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:38.813 21:19:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:38.813 21:19:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:38.813 21:19:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:38.813 21:19:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:38.813 21:19:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:38.813 21:19:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:38.813 21:19:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:38.813 21:19:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:38.813 21:19:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:38.813 21:19:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.813 21:19:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.071 21:19:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:39.071 "name": "raid_bdev1", 00:22:39.071 "uuid": "b685d780-b00b-475e-bd56-42d7212f5dbd", 00:22:39.071 "strip_size_kb": 64, 00:22:39.071 "state": "online", 00:22:39.071 "raid_level": "raid5f", 00:22:39.071 "superblock": true, 00:22:39.071 "num_base_bdevs": 3, 00:22:39.071 "num_base_bdevs_discovered": 3, 00:22:39.071 "num_base_bdevs_operational": 3, 00:22:39.071 "base_bdevs_list": [ 00:22:39.071 { 00:22:39.071 "name": "BaseBdev1", 00:22:39.071 "uuid": "d74ab100-dee9-5f0c-a03b-fdf9c2750a5d", 00:22:39.071 "is_configured": true, 00:22:39.071 "data_offset": 2048, 00:22:39.071 "data_size": 63488 00:22:39.071 }, 00:22:39.071 { 00:22:39.071 "name": "BaseBdev2", 00:22:39.071 "uuid": "d2998870-3986-544d-9f3f-53273257777f", 00:22:39.071 "is_configured": true, 00:22:39.071 "data_offset": 2048, 00:22:39.071 "data_size": 63488 00:22:39.071 }, 00:22:39.071 { 00:22:39.071 "name": "BaseBdev3", 00:22:39.071 "uuid": "538223cf-875e-5f6c-84fd-3f7a6b9fabe5", 00:22:39.071 "is_configured": true, 00:22:39.071 "data_offset": 2048, 00:22:39.071 "data_size": 63488 00:22:39.071 } 00:22:39.071 ] 00:22:39.071 }' 00:22:39.071 21:19:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:39.071 21:19:01 -- common/autotest_common.sh@10 -- # set +x 00:22:39.637 21:19:02 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:39.637 21:19:02 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:39.895 [2024-06-07 21:19:02.445403] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:39.895 21:19:02 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:22:39.895 21:19:02 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.895 
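A note on the geometry printed above, since the numbers are interdependent. Each base bdev is a 32 MiB malloc bdev behind a passthru (bdev_malloc_create 32 512, then bdev_passthru_create), and the spare additionally sits behind a delay bdev; reading bdev_delay_create's -r/-t/-w/-n arguments as average and p99 read/write latencies in microseconds (our assumption), writes to the spare cost roughly 100 ms each, which is what keeps the later rebuild slow enough to sample. The reported sizes then check out:

  # Inputs all appear in the log above: 32 MiB bdevs, 512 B blocks,
  # data_offset 2048, three-way raid5f.
  echo $(( 32 * 1024 * 1024 / 512 ))  # 65536 blocks per base bdev
  echo $(( 65536 - 2048 ))            # 63488 data blocks left after the superblock area (data_size)
  echo $(( (3 - 1) * 63488 ))         # 126976 = raid_bdev_size: raid5f spends one strip per stripe on parity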
21:19:02 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:40.153 21:19:02 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:40.153 21:19:02 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:40.153 21:19:02 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:40.153 21:19:02 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:40.153 21:19:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:40.153 21:19:02 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:40.153 21:19:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:40.153 21:19:02 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:40.153 21:19:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:40.153 21:19:02 -- bdev/nbd_common.sh@12 -- # local i 00:22:40.153 21:19:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:40.153 21:19:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:40.153 21:19:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:40.411 [2024-06-07 21:19:02.845324] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:40.411 /dev/nbd0 00:22:40.411 21:19:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:40.411 21:19:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:40.411 21:19:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:40.411 21:19:02 -- common/autotest_common.sh@857 -- # local i 00:22:40.411 21:19:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:40.411 21:19:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:40.411 21:19:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:40.411 21:19:02 -- common/autotest_common.sh@861 -- # break 00:22:40.411 21:19:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:40.411 21:19:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:40.412 21:19:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:40.412 1+0 records in 00:22:40.412 1+0 records out 00:22:40.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357636 s, 11.5 MB/s 00:22:40.412 21:19:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:40.412 21:19:02 -- common/autotest_common.sh@874 -- # size=4096 00:22:40.412 21:19:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:40.412 21:19:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:40.412 21:19:02 -- common/autotest_common.sh@877 -- # return 0 00:22:40.412 21:19:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:40.412 21:19:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:40.412 21:19:02 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:22:40.412 21:19:02 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:22:40.412 21:19:02 -- bdev/bdev_raid.sh@582 -- # echo 128 00:22:40.412 21:19:02 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:22:40.669 496+0 records in 00:22:40.669 496+0 records out 00:22:40.669 65011712 bytes (65 MB, 62 MiB) copied, 0.340109 s, 191 MB/s 00:22:40.669 21:19:03 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:40.669 21:19:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:40.669 21:19:03 -- 
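The urandom fill above is sized to cover the array exactly once with full-stripe writes, which is why write_unit_size comes out to 256 blocks and dd uses bs=131072 count=496. The arithmetic, using only values already printed in the log:

  echo $(( 64 * 1024 / 512 ))        # strip_size 64 KiB = 128 blocks
  echo $(( (3 - 1) * 128 * 512 ))    # full raid5f stripe = 2 data strips = 131072 bytes = dd's bs
  echo $(( 126976 * 512 / 131072 ))  # 496 full-stripe writes span all 126976 blocks = dd's count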
bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:40.669 21:19:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:40.669 21:19:03 -- bdev/nbd_common.sh@51 -- # local i 00:22:40.669 21:19:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:40.669 21:19:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:40.926 21:19:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:40.926 21:19:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:40.926 21:19:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:40.926 21:19:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:40.926 21:19:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:40.926 21:19:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:40.926 21:19:03 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:40.926 [2024-06-07 21:19:03.525272] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.183 21:19:03 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:41.183 21:19:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:41.183 21:19:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:41.183 21:19:03 -- bdev/nbd_common.sh@41 -- # break 00:22:41.183 21:19:03 -- bdev/nbd_common.sh@45 -- # return 0 00:22:41.183 21:19:03 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:41.441 [2024-06-07 21:19:03.869004] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:41.441 21:19:03 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:41.441 21:19:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:41.441 21:19:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:41.441 21:19:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:41.441 21:19:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:41.441 21:19:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:41.441 21:19:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:41.441 21:19:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:41.441 21:19:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:41.441 21:19:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:41.441 21:19:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.441 21:19:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.441 21:19:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:41.441 "name": "raid_bdev1", 00:22:41.441 "uuid": "b685d780-b00b-475e-bd56-42d7212f5dbd", 00:22:41.441 "strip_size_kb": 64, 00:22:41.441 "state": "online", 00:22:41.441 "raid_level": "raid5f", 00:22:41.441 "superblock": true, 00:22:41.441 "num_base_bdevs": 3, 00:22:41.441 "num_base_bdevs_discovered": 2, 00:22:41.441 "num_base_bdevs_operational": 2, 00:22:41.441 "base_bdevs_list": [ 00:22:41.441 { 00:22:41.441 "name": null, 00:22:41.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.441 "is_configured": false, 00:22:41.441 "data_offset": 2048, 00:22:41.441 "data_size": 63488 00:22:41.441 }, 00:22:41.441 { 00:22:41.441 "name": "BaseBdev2", 00:22:41.441 "uuid": "d2998870-3986-544d-9f3f-53273257777f", 00:22:41.441 "is_configured": true, 00:22:41.441 "data_offset": 2048, 00:22:41.441 "data_size": 63488 00:22:41.441 }, 
00:22:41.441 { 00:22:41.441 "name": "BaseBdev3", 00:22:41.441 "uuid": "538223cf-875e-5f6c-84fd-3f7a6b9fabe5", 00:22:41.441 "is_configured": true, 00:22:41.441 "data_offset": 2048, 00:22:41.441 "data_size": 63488 00:22:41.441 } 00:22:41.441 ] 00:22:41.441 }' 00:22:41.441 21:19:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:41.441 21:19:04 -- common/autotest_common.sh@10 -- # set +x 00:22:42.375 21:19:04 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:42.375 [2024-06-07 21:19:05.029387] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:42.375 [2024-06-07 21:19:05.029462] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:42.375 [2024-06-07 21:19:05.034402] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002acc0 00:22:42.375 [2024-06-07 21:19:05.036904] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:42.375 21:19:05 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:43.750 21:19:06 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:43.750 21:19:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:43.750 21:19:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:43.750 21:19:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:43.750 21:19:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:43.750 21:19:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.750 21:19:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.750 21:19:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:43.750 "name": "raid_bdev1", 00:22:43.750 "uuid": "b685d780-b00b-475e-bd56-42d7212f5dbd", 00:22:43.750 "strip_size_kb": 64, 00:22:43.750 "state": "online", 00:22:43.750 "raid_level": "raid5f", 00:22:43.750 "superblock": true, 00:22:43.750 "num_base_bdevs": 3, 00:22:43.750 "num_base_bdevs_discovered": 3, 00:22:43.750 "num_base_bdevs_operational": 3, 00:22:43.750 "process": { 00:22:43.750 "type": "rebuild", 00:22:43.750 "target": "spare", 00:22:43.750 "progress": { 00:22:43.750 "blocks": 22528, 00:22:43.750 "percent": 17 00:22:43.750 } 00:22:43.750 }, 00:22:43.750 "base_bdevs_list": [ 00:22:43.750 { 00:22:43.750 "name": "spare", 00:22:43.750 "uuid": "407b4fbf-6b6f-554d-8435-68924b14c874", 00:22:43.750 "is_configured": true, 00:22:43.750 "data_offset": 2048, 00:22:43.750 "data_size": 63488 00:22:43.750 }, 00:22:43.750 { 00:22:43.751 "name": "BaseBdev2", 00:22:43.751 "uuid": "d2998870-3986-544d-9f3f-53273257777f", 00:22:43.751 "is_configured": true, 00:22:43.751 "data_offset": 2048, 00:22:43.751 "data_size": 63488 00:22:43.751 }, 00:22:43.751 { 00:22:43.751 "name": "BaseBdev3", 00:22:43.751 "uuid": "538223cf-875e-5f6c-84fd-3f7a6b9fabe5", 00:22:43.751 "is_configured": true, 00:22:43.751 "data_offset": 2048, 00:22:43.751 "data_size": 63488 00:22:43.751 } 00:22:43.751 ] 00:22:43.751 }' 00:22:43.751 21:19:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:43.751 21:19:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:43.751 21:19:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:43.751 21:19:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:43.751 21:19:06 -- bdev/bdev_raid.sh@604 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:44.010 [2024-06-07 21:19:06.607138] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:44.010 [2024-06-07 21:19:06.653129] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:44.010 [2024-06-07 21:19:06.653298] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:44.010 21:19:06 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:44.010 21:19:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:44.010 21:19:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:44.010 21:19:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:44.010 21:19:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:44.010 21:19:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:44.010 21:19:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:44.010 21:19:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:44.010 21:19:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:44.010 21:19:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:44.010 21:19:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.010 21:19:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.577 21:19:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:44.577 "name": "raid_bdev1", 00:22:44.577 "uuid": "b685d780-b00b-475e-bd56-42d7212f5dbd", 00:22:44.577 "strip_size_kb": 64, 00:22:44.577 "state": "online", 00:22:44.577 "raid_level": "raid5f", 00:22:44.577 "superblock": true, 00:22:44.577 "num_base_bdevs": 3, 00:22:44.577 "num_base_bdevs_discovered": 2, 00:22:44.577 "num_base_bdevs_operational": 2, 00:22:44.577 "base_bdevs_list": [ 00:22:44.577 { 00:22:44.577 "name": null, 00:22:44.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.577 "is_configured": false, 00:22:44.577 "data_offset": 2048, 00:22:44.577 "data_size": 63488 00:22:44.577 }, 00:22:44.577 { 00:22:44.577 "name": "BaseBdev2", 00:22:44.577 "uuid": "d2998870-3986-544d-9f3f-53273257777f", 00:22:44.577 "is_configured": true, 00:22:44.577 "data_offset": 2048, 00:22:44.577 "data_size": 63488 00:22:44.577 }, 00:22:44.577 { 00:22:44.577 "name": "BaseBdev3", 00:22:44.577 "uuid": "538223cf-875e-5f6c-84fd-3f7a6b9fabe5", 00:22:44.577 "is_configured": true, 00:22:44.577 "data_offset": 2048, 00:22:44.577 "data_size": 63488 00:22:44.577 } 00:22:44.577 ] 00:22:44.577 }' 00:22:44.577 21:19:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:44.577 21:19:06 -- common/autotest_common.sh@10 -- # set +x 00:22:45.143 21:19:07 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:45.143 21:19:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:45.143 21:19:07 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:45.143 21:19:07 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:45.143 21:19:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:45.143 21:19:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.143 21:19:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.402 21:19:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:45.402 "name": 
"raid_bdev1", 00:22:45.402 "uuid": "b685d780-b00b-475e-bd56-42d7212f5dbd", 00:22:45.402 "strip_size_kb": 64, 00:22:45.402 "state": "online", 00:22:45.402 "raid_level": "raid5f", 00:22:45.402 "superblock": true, 00:22:45.402 "num_base_bdevs": 3, 00:22:45.402 "num_base_bdevs_discovered": 2, 00:22:45.402 "num_base_bdevs_operational": 2, 00:22:45.402 "base_bdevs_list": [ 00:22:45.402 { 00:22:45.402 "name": null, 00:22:45.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.402 "is_configured": false, 00:22:45.402 "data_offset": 2048, 00:22:45.402 "data_size": 63488 00:22:45.402 }, 00:22:45.402 { 00:22:45.402 "name": "BaseBdev2", 00:22:45.402 "uuid": "d2998870-3986-544d-9f3f-53273257777f", 00:22:45.402 "is_configured": true, 00:22:45.402 "data_offset": 2048, 00:22:45.402 "data_size": 63488 00:22:45.402 }, 00:22:45.402 { 00:22:45.402 "name": "BaseBdev3", 00:22:45.402 "uuid": "538223cf-875e-5f6c-84fd-3f7a6b9fabe5", 00:22:45.402 "is_configured": true, 00:22:45.402 "data_offset": 2048, 00:22:45.402 "data_size": 63488 00:22:45.402 } 00:22:45.402 ] 00:22:45.402 }' 00:22:45.402 21:19:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:45.402 21:19:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:45.402 21:19:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:45.402 21:19:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:45.402 21:19:07 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:45.692 [2024-06-07 21:19:08.192372] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:45.692 [2024-06-07 21:19:08.192436] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:45.692 [2024-06-07 21:19:08.197509] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ae60 00:22:45.692 [2024-06-07 21:19:08.199969] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:45.692 21:19:08 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:46.628 21:19:09 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:46.628 21:19:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:46.628 21:19:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:46.628 21:19:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:46.628 21:19:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:46.628 21:19:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.628 21:19:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.886 21:19:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:46.886 "name": "raid_bdev1", 00:22:46.886 "uuid": "b685d780-b00b-475e-bd56-42d7212f5dbd", 00:22:46.886 "strip_size_kb": 64, 00:22:46.886 "state": "online", 00:22:46.886 "raid_level": "raid5f", 00:22:46.886 "superblock": true, 00:22:46.886 "num_base_bdevs": 3, 00:22:46.886 "num_base_bdevs_discovered": 3, 00:22:46.886 "num_base_bdevs_operational": 3, 00:22:46.886 "process": { 00:22:46.886 "type": "rebuild", 00:22:46.886 "target": "spare", 00:22:46.886 "progress": { 00:22:46.886 "blocks": 22528, 00:22:46.886 "percent": 17 00:22:46.886 } 00:22:46.886 }, 00:22:46.886 "base_bdevs_list": [ 00:22:46.886 { 00:22:46.886 "name": "spare", 00:22:46.886 "uuid": "407b4fbf-6b6f-554d-8435-68924b14c874", 
00:22:46.886 "is_configured": true, 00:22:46.886 "data_offset": 2048, 00:22:46.886 "data_size": 63488 00:22:46.886 }, 00:22:46.886 { 00:22:46.886 "name": "BaseBdev2", 00:22:46.886 "uuid": "d2998870-3986-544d-9f3f-53273257777f", 00:22:46.886 "is_configured": true, 00:22:46.886 "data_offset": 2048, 00:22:46.886 "data_size": 63488 00:22:46.886 }, 00:22:46.886 { 00:22:46.886 "name": "BaseBdev3", 00:22:46.886 "uuid": "538223cf-875e-5f6c-84fd-3f7a6b9fabe5", 00:22:46.886 "is_configured": true, 00:22:46.886 "data_offset": 2048, 00:22:46.886 "data_size": 63488 00:22:46.886 } 00:22:46.886 ] 00:22:46.886 }' 00:22:46.886 21:19:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:46.886 21:19:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:46.886 21:19:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:46.886 21:19:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:46.886 21:19:09 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:46.886 21:19:09 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:46.886 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:46.886 21:19:09 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:22:46.886 21:19:09 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:22:46.886 21:19:09 -- bdev/bdev_raid.sh@657 -- # local timeout=609 00:22:46.886 21:19:09 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:46.886 21:19:09 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:46.886 21:19:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:46.886 21:19:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:46.886 21:19:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:46.886 21:19:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:46.886 21:19:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.886 21:19:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.145 21:19:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:47.145 "name": "raid_bdev1", 00:22:47.145 "uuid": "b685d780-b00b-475e-bd56-42d7212f5dbd", 00:22:47.145 "strip_size_kb": 64, 00:22:47.145 "state": "online", 00:22:47.145 "raid_level": "raid5f", 00:22:47.145 "superblock": true, 00:22:47.145 "num_base_bdevs": 3, 00:22:47.145 "num_base_bdevs_discovered": 3, 00:22:47.145 "num_base_bdevs_operational": 3, 00:22:47.145 "process": { 00:22:47.145 "type": "rebuild", 00:22:47.145 "target": "spare", 00:22:47.145 "progress": { 00:22:47.145 "blocks": 30720, 00:22:47.145 "percent": 24 00:22:47.145 } 00:22:47.145 }, 00:22:47.145 "base_bdevs_list": [ 00:22:47.145 { 00:22:47.145 "name": "spare", 00:22:47.145 "uuid": "407b4fbf-6b6f-554d-8435-68924b14c874", 00:22:47.145 "is_configured": true, 00:22:47.145 "data_offset": 2048, 00:22:47.145 "data_size": 63488 00:22:47.145 }, 00:22:47.145 { 00:22:47.145 "name": "BaseBdev2", 00:22:47.145 "uuid": "d2998870-3986-544d-9f3f-53273257777f", 00:22:47.145 "is_configured": true, 00:22:47.145 "data_offset": 2048, 00:22:47.145 "data_size": 63488 00:22:47.145 }, 00:22:47.145 { 00:22:47.145 "name": "BaseBdev3", 00:22:47.145 "uuid": "538223cf-875e-5f6c-84fd-3f7a6b9fabe5", 00:22:47.145 "is_configured": true, 00:22:47.145 "data_offset": 2048, 00:22:47.145 "data_size": 63488 00:22:47.145 } 00:22:47.145 ] 00:22:47.145 }' 00:22:47.145 21:19:09 -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:47.145 21:19:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:47.145 21:19:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:47.403 21:19:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:47.403 21:19:09 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:48.338 21:19:10 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:48.338 21:19:10 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:48.338 21:19:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:48.338 21:19:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:48.338 21:19:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:48.338 21:19:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:48.338 21:19:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.338 21:19:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.596 21:19:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:48.596 "name": "raid_bdev1", 00:22:48.596 "uuid": "b685d780-b00b-475e-bd56-42d7212f5dbd", 00:22:48.596 "strip_size_kb": 64, 00:22:48.596 "state": "online", 00:22:48.596 "raid_level": "raid5f", 00:22:48.596 "superblock": true, 00:22:48.596 "num_base_bdevs": 3, 00:22:48.597 "num_base_bdevs_discovered": 3, 00:22:48.597 "num_base_bdevs_operational": 3, 00:22:48.597 "process": { 00:22:48.597 "type": "rebuild", 00:22:48.597 "target": "spare", 00:22:48.597 "progress": { 00:22:48.597 "blocks": 57344, 00:22:48.597 "percent": 45 00:22:48.597 } 00:22:48.597 }, 00:22:48.597 "base_bdevs_list": [ 00:22:48.597 { 00:22:48.597 "name": "spare", 00:22:48.597 "uuid": "407b4fbf-6b6f-554d-8435-68924b14c874", 00:22:48.597 "is_configured": true, 00:22:48.597 "data_offset": 2048, 00:22:48.597 "data_size": 63488 00:22:48.597 }, 00:22:48.597 { 00:22:48.597 "name": "BaseBdev2", 00:22:48.597 "uuid": "d2998870-3986-544d-9f3f-53273257777f", 00:22:48.597 "is_configured": true, 00:22:48.597 "data_offset": 2048, 00:22:48.597 "data_size": 63488 00:22:48.597 }, 00:22:48.597 { 00:22:48.597 "name": "BaseBdev3", 00:22:48.597 "uuid": "538223cf-875e-5f6c-84fd-3f7a6b9fabe5", 00:22:48.597 "is_configured": true, 00:22:48.597 "data_offset": 2048, 00:22:48.597 "data_size": 63488 00:22:48.597 } 00:22:48.597 ] 00:22:48.597 }' 00:22:48.597 21:19:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:48.597 21:19:11 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:48.597 21:19:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:48.597 21:19:11 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:48.597 21:19:11 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:49.970 21:19:12 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:49.970 21:19:12 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:49.970 21:19:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:49.970 21:19:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:49.970 21:19:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:49.970 21:19:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:49.970 21:19:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.970 21:19:12 -- bdev/bdev_raid.sh@188 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:22:49.970 21:19:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:49.970 "name": "raid_bdev1", 00:22:49.970 "uuid": "b685d780-b00b-475e-bd56-42d7212f5dbd", 00:22:49.970 "strip_size_kb": 64, 00:22:49.970 "state": "online", 00:22:49.971 "raid_level": "raid5f", 00:22:49.971 "superblock": true, 00:22:49.971 "num_base_bdevs": 3, 00:22:49.971 "num_base_bdevs_discovered": 3, 00:22:49.971 "num_base_bdevs_operational": 3, 00:22:49.971 "process": { 00:22:49.971 "type": "rebuild", 00:22:49.971 "target": "spare", 00:22:49.971 "progress": { 00:22:49.971 "blocks": 83968, 00:22:49.971 "percent": 66 00:22:49.971 } 00:22:49.971 }, 00:22:49.971 "base_bdevs_list": [ 00:22:49.971 { 00:22:49.971 "name": "spare", 00:22:49.971 "uuid": "407b4fbf-6b6f-554d-8435-68924b14c874", 00:22:49.971 "is_configured": true, 00:22:49.971 "data_offset": 2048, 00:22:49.971 "data_size": 63488 00:22:49.971 }, 00:22:49.971 { 00:22:49.971 "name": "BaseBdev2", 00:22:49.971 "uuid": "d2998870-3986-544d-9f3f-53273257777f", 00:22:49.971 "is_configured": true, 00:22:49.971 "data_offset": 2048, 00:22:49.971 "data_size": 63488 00:22:49.971 }, 00:22:49.971 { 00:22:49.971 "name": "BaseBdev3", 00:22:49.971 "uuid": "538223cf-875e-5f6c-84fd-3f7a6b9fabe5", 00:22:49.971 "is_configured": true, 00:22:49.971 "data_offset": 2048, 00:22:49.971 "data_size": 63488 00:22:49.971 } 00:22:49.971 ] 00:22:49.971 }' 00:22:49.971 21:19:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:49.971 21:19:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:49.971 21:19:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:49.971 21:19:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:49.971 21:19:12 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:50.915 21:19:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:50.915 21:19:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:50.915 21:19:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:50.915 21:19:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:50.915 21:19:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:50.915 21:19:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:50.915 21:19:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.915 21:19:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.173 21:19:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:51.173 "name": "raid_bdev1", 00:22:51.173 "uuid": "b685d780-b00b-475e-bd56-42d7212f5dbd", 00:22:51.173 "strip_size_kb": 64, 00:22:51.173 "state": "online", 00:22:51.173 "raid_level": "raid5f", 00:22:51.173 "superblock": true, 00:22:51.173 "num_base_bdevs": 3, 00:22:51.173 "num_base_bdevs_discovered": 3, 00:22:51.173 "num_base_bdevs_operational": 3, 00:22:51.173 "process": { 00:22:51.173 "type": "rebuild", 00:22:51.173 "target": "spare", 00:22:51.173 "progress": { 00:22:51.173 "blocks": 112640, 00:22:51.173 "percent": 88 00:22:51.173 } 00:22:51.173 }, 00:22:51.173 "base_bdevs_list": [ 00:22:51.173 { 00:22:51.173 "name": "spare", 00:22:51.173 "uuid": "407b4fbf-6b6f-554d-8435-68924b14c874", 00:22:51.173 "is_configured": true, 00:22:51.173 "data_offset": 2048, 00:22:51.173 "data_size": 63488 00:22:51.173 }, 00:22:51.173 { 00:22:51.173 "name": "BaseBdev2", 00:22:51.173 "uuid": "d2998870-3986-544d-9f3f-53273257777f", 00:22:51.173 
"is_configured": true, 00:22:51.173 "data_offset": 2048, 00:22:51.173 "data_size": 63488 00:22:51.173 }, 00:22:51.173 { 00:22:51.173 "name": "BaseBdev3", 00:22:51.173 "uuid": "538223cf-875e-5f6c-84fd-3f7a6b9fabe5", 00:22:51.173 "is_configured": true, 00:22:51.173 "data_offset": 2048, 00:22:51.173 "data_size": 63488 00:22:51.173 } 00:22:51.173 ] 00:22:51.173 }' 00:22:51.173 21:19:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:51.431 21:19:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:51.431 21:19:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:51.431 21:19:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:51.431 21:19:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:51.997 [2024-06-07 21:19:14.455295] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:51.997 [2024-06-07 21:19:14.455376] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:51.997 [2024-06-07 21:19:14.455569] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.255 21:19:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:52.255 21:19:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:52.255 21:19:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:52.255 21:19:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:52.255 21:19:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:52.255 21:19:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:52.255 21:19:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.255 21:19:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.513 21:19:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:52.513 "name": "raid_bdev1", 00:22:52.513 "uuid": "b685d780-b00b-475e-bd56-42d7212f5dbd", 00:22:52.513 "strip_size_kb": 64, 00:22:52.513 "state": "online", 00:22:52.513 "raid_level": "raid5f", 00:22:52.513 "superblock": true, 00:22:52.513 "num_base_bdevs": 3, 00:22:52.513 "num_base_bdevs_discovered": 3, 00:22:52.513 "num_base_bdevs_operational": 3, 00:22:52.513 "base_bdevs_list": [ 00:22:52.513 { 00:22:52.513 "name": "spare", 00:22:52.513 "uuid": "407b4fbf-6b6f-554d-8435-68924b14c874", 00:22:52.513 "is_configured": true, 00:22:52.513 "data_offset": 2048, 00:22:52.513 "data_size": 63488 00:22:52.513 }, 00:22:52.513 { 00:22:52.513 "name": "BaseBdev2", 00:22:52.513 "uuid": "d2998870-3986-544d-9f3f-53273257777f", 00:22:52.513 "is_configured": true, 00:22:52.513 "data_offset": 2048, 00:22:52.513 "data_size": 63488 00:22:52.513 }, 00:22:52.513 { 00:22:52.513 "name": "BaseBdev3", 00:22:52.513 "uuid": "538223cf-875e-5f6c-84fd-3f7a6b9fabe5", 00:22:52.513 "is_configured": true, 00:22:52.513 "data_offset": 2048, 00:22:52.513 "data_size": 63488 00:22:52.513 } 00:22:52.513 ] 00:22:52.513 }' 00:22:52.513 21:19:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:52.771 21:19:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:52.771 21:19:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:52.771 21:19:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:52.771 21:19:15 -- bdev/bdev_raid.sh@660 -- # break 00:22:52.771 21:19:15 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:52.771 21:19:15 -- 
bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:52.771 21:19:15 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:52.771 21:19:15 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:52.771 21:19:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:52.772 21:19:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.772 21:19:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.030 21:19:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:53.030 "name": "raid_bdev1", 00:22:53.030 "uuid": "b685d780-b00b-475e-bd56-42d7212f5dbd", 00:22:53.030 "strip_size_kb": 64, 00:22:53.030 "state": "online", 00:22:53.030 "raid_level": "raid5f", 00:22:53.030 "superblock": true, 00:22:53.030 "num_base_bdevs": 3, 00:22:53.030 "num_base_bdevs_discovered": 3, 00:22:53.030 "num_base_bdevs_operational": 3, 00:22:53.030 "base_bdevs_list": [ 00:22:53.030 { 00:22:53.030 "name": "spare", 00:22:53.030 "uuid": "407b4fbf-6b6f-554d-8435-68924b14c874", 00:22:53.030 "is_configured": true, 00:22:53.030 "data_offset": 2048, 00:22:53.030 "data_size": 63488 00:22:53.030 }, 00:22:53.030 { 00:22:53.030 "name": "BaseBdev2", 00:22:53.030 "uuid": "d2998870-3986-544d-9f3f-53273257777f", 00:22:53.030 "is_configured": true, 00:22:53.030 "data_offset": 2048, 00:22:53.030 "data_size": 63488 00:22:53.030 }, 00:22:53.030 { 00:22:53.030 "name": "BaseBdev3", 00:22:53.030 "uuid": "538223cf-875e-5f6c-84fd-3f7a6b9fabe5", 00:22:53.030 "is_configured": true, 00:22:53.030 "data_offset": 2048, 00:22:53.030 "data_size": 63488 00:22:53.030 } 00:22:53.030 ] 00:22:53.030 }' 00:22:53.030 21:19:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:53.030 21:19:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:53.030 21:19:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:53.030 21:19:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:53.030 21:19:15 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:53.030 21:19:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:53.030 21:19:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:53.030 21:19:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:53.030 21:19:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:53.030 21:19:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:53.030 21:19:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:53.030 21:19:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:53.030 21:19:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:53.030 21:19:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:53.030 21:19:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.030 21:19:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.289 21:19:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:53.289 "name": "raid_bdev1", 00:22:53.289 "uuid": "b685d780-b00b-475e-bd56-42d7212f5dbd", 00:22:53.289 "strip_size_kb": 64, 00:22:53.289 "state": "online", 00:22:53.289 "raid_level": "raid5f", 00:22:53.289 "superblock": true, 00:22:53.289 "num_base_bdevs": 3, 00:22:53.289 "num_base_bdevs_discovered": 3, 00:22:53.289 "num_base_bdevs_operational": 3, 00:22:53.289 "base_bdevs_list": [ 00:22:53.289 { 00:22:53.289 "name": 
"spare", 00:22:53.289 "uuid": "407b4fbf-6b6f-554d-8435-68924b14c874", 00:22:53.289 "is_configured": true, 00:22:53.289 "data_offset": 2048, 00:22:53.289 "data_size": 63488 00:22:53.289 }, 00:22:53.289 { 00:22:53.289 "name": "BaseBdev2", 00:22:53.289 "uuid": "d2998870-3986-544d-9f3f-53273257777f", 00:22:53.289 "is_configured": true, 00:22:53.289 "data_offset": 2048, 00:22:53.289 "data_size": 63488 00:22:53.289 }, 00:22:53.289 { 00:22:53.289 "name": "BaseBdev3", 00:22:53.289 "uuid": "538223cf-875e-5f6c-84fd-3f7a6b9fabe5", 00:22:53.289 "is_configured": true, 00:22:53.289 "data_offset": 2048, 00:22:53.289 "data_size": 63488 00:22:53.289 } 00:22:53.289 ] 00:22:53.289 }' 00:22:53.289 21:19:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:53.289 21:19:15 -- common/autotest_common.sh@10 -- # set +x 00:22:53.855 21:19:16 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:54.114 [2024-06-07 21:19:16.718260] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:54.114 [2024-06-07 21:19:16.718305] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:54.114 [2024-06-07 21:19:16.718448] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:54.114 [2024-06-07 21:19:16.718570] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:54.114 [2024-06-07 21:19:16.718585] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:22:54.114 21:19:16 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.114 21:19:16 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:54.373 21:19:16 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:54.373 21:19:16 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:54.373 21:19:16 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:54.373 21:19:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:54.373 21:19:16 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:54.373 21:19:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:54.373 21:19:16 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:54.373 21:19:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:54.373 21:19:16 -- bdev/nbd_common.sh@12 -- # local i 00:22:54.373 21:19:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:54.373 21:19:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:54.373 21:19:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:54.631 /dev/nbd0 00:22:54.631 21:19:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:54.631 21:19:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:54.631 21:19:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:54.631 21:19:17 -- common/autotest_common.sh@857 -- # local i 00:22:54.631 21:19:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:54.631 21:19:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:54.631 21:19:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:54.631 21:19:17 -- common/autotest_common.sh@861 -- # break 00:22:54.631 21:19:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:54.631 21:19:17 -- 
common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:54.631 21:19:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:54.631 1+0 records in 00:22:54.631 1+0 records out 00:22:54.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000157439 s, 26.0 MB/s 00:22:54.631 21:19:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:54.631 21:19:17 -- common/autotest_common.sh@874 -- # size=4096 00:22:54.631 21:19:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:54.631 21:19:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:54.631 21:19:17 -- common/autotest_common.sh@877 -- # return 0 00:22:54.631 21:19:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:54.631 21:19:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:54.631 21:19:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:54.890 /dev/nbd1 00:22:54.890 21:19:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:54.890 21:19:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:54.890 21:19:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:54.890 21:19:17 -- common/autotest_common.sh@857 -- # local i 00:22:54.890 21:19:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:54.890 21:19:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:54.890 21:19:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:54.890 21:19:17 -- common/autotest_common.sh@861 -- # break 00:22:54.890 21:19:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:54.890 21:19:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:54.890 21:19:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:54.890 1+0 records in 00:22:54.890 1+0 records out 00:22:54.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470576 s, 8.7 MB/s 00:22:54.890 21:19:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:54.890 21:19:17 -- common/autotest_common.sh@874 -- # size=4096 00:22:54.890 21:19:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:54.890 21:19:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:54.890 21:19:17 -- common/autotest_common.sh@877 -- # return 0 00:22:54.890 21:19:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:54.890 21:19:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:54.890 21:19:17 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:54.890 21:19:17 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:54.890 21:19:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:54.890 21:19:17 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:54.890 21:19:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:54.890 21:19:17 -- bdev/nbd_common.sh@51 -- # local i 00:22:54.890 21:19:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:54.890 21:19:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:55.148 21:19:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:55.148 21:19:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
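Note the comparison offset: the earlier no-superblock run compared /dev/nbd0 and /dev/nbd1 with cmp -i 0, but here BaseBdev1 and the rebuilt spare are compared from byte 1048576 onward. With superblocks enabled, data_offset is 2048 blocks, and the leading region holds per-device superblock metadata that the spare is not expected to mirror (our reading), so only the data area is checked:

  echo $(( 2048 * 512 ))              # 1048576 bytes: data_offset in blocks times the 512 B block size
  cmp -i 1048576 /dev/nbd0 /dev/nbd1  # GNU cmp: a single -i value skips that many bytes in both files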
00:22:55.148 21:19:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:55.148 21:19:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:55.148 21:19:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:55.148 21:19:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:55.407 21:19:17 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:55.407 21:19:17 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:55.407 21:19:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:55.407 21:19:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:55.407 21:19:17 -- bdev/nbd_common.sh@41 -- # break 00:22:55.407 21:19:17 -- bdev/nbd_common.sh@45 -- # return 0 00:22:55.407 21:19:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:55.407 21:19:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:55.665 21:19:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:55.665 21:19:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:55.665 21:19:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:55.665 21:19:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:55.665 21:19:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:55.665 21:19:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:55.665 21:19:18 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:55.665 21:19:18 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:55.665 21:19:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:55.665 21:19:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:55.665 21:19:18 -- bdev/nbd_common.sh@41 -- # break 00:22:55.665 21:19:18 -- bdev/nbd_common.sh@45 -- # return 0 00:22:55.665 21:19:18 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:55.665 21:19:18 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:55.665 21:19:18 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:55.665 21:19:18 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:55.923 21:19:18 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:56.181 [2024-06-07 21:19:18.738154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:56.181 [2024-06-07 21:19:18.738315] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.181 [2024-06-07 21:19:18.738355] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:56.181 [2024-06-07 21:19:18.738386] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.181 [2024-06-07 21:19:18.740811] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.182 [2024-06-07 21:19:18.740935] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:56.182 [2024-06-07 21:19:18.741038] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:56.182 [2024-06-07 21:19:18.741115] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:56.182 BaseBdev1 00:22:56.182 21:19:18 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:56.182 21:19:18 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:22:56.182 21:19:18 -- bdev/bdev_raid.sh@698 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:22:56.442 21:19:18 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:56.700 [2024-06-07 21:19:19.146220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:56.700 [2024-06-07 21:19:19.146336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.700 [2024-06-07 21:19:19.146382] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:56.700 [2024-06-07 21:19:19.146405] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.700 [2024-06-07 21:19:19.146882] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.700 [2024-06-07 21:19:19.146941] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:56.700 [2024-06-07 21:19:19.147059] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:22:56.700 [2024-06-07 21:19:19.147075] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:22:56.700 [2024-06-07 21:19:19.147083] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:56.700 [2024-06-07 21:19:19.147123] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state configuring 00:22:56.700 [2024-06-07 21:19:19.147178] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:56.700 BaseBdev2 00:22:56.700 21:19:19 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:56.700 21:19:19 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:56.700 21:19:19 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:56.958 21:19:19 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:56.958 [2024-06-07 21:19:19.622344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:56.958 [2024-06-07 21:19:19.622473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.958 [2024-06-07 21:19:19.622522] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:22:56.958 [2024-06-07 21:19:19.622546] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.958 [2024-06-07 21:19:19.623039] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.958 [2024-06-07 21:19:19.623102] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:56.958 [2024-06-07 21:19:19.623215] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:56.958 [2024-06-07 21:19:19.623257] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:56.958 BaseBdev3 00:22:57.217 21:19:19 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:57.217 21:19:19 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b spare_delay -p spare 00:22:57.475 [2024-06-07 21:19:20.038438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:57.475 [2024-06-07 21:19:20.038569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.475 [2024-06-07 21:19:20.038624] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:22:57.475 [2024-06-07 21:19:20.038655] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.475 [2024-06-07 21:19:20.039185] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.475 [2024-06-07 21:19:20.039276] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:57.475 [2024-06-07 21:19:20.039381] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:57.475 [2024-06-07 21:19:20.039417] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:57.475 spare 00:22:57.475 21:19:20 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:57.475 21:19:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:57.475 21:19:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:57.475 21:19:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:57.475 21:19:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:57.475 21:19:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:57.475 21:19:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:57.475 21:19:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:57.475 21:19:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:57.475 21:19:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:57.475 21:19:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.476 21:19:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.476 [2024-06-07 21:19:20.139607] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b780 00:22:57.476 [2024-06-07 21:19:20.139641] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:57.476 [2024-06-07 21:19:20.139808] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004bb40 00:22:57.476 [2024-06-07 21:19:20.140674] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b780 00:22:57.476 [2024-06-07 21:19:20.140695] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b780 00:22:57.476 [2024-06-07 21:19:20.140873] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:57.734 21:19:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:57.734 "name": "raid_bdev1", 00:22:57.734 "uuid": "b685d780-b00b-475e-bd56-42d7212f5dbd", 00:22:57.734 "strip_size_kb": 64, 00:22:57.734 "state": "online", 00:22:57.734 "raid_level": "raid5f", 00:22:57.734 "superblock": true, 00:22:57.734 "num_base_bdevs": 3, 00:22:57.734 "num_base_bdevs_discovered": 3, 00:22:57.734 "num_base_bdevs_operational": 3, 00:22:57.734 "base_bdevs_list": [ 00:22:57.734 { 00:22:57.734 "name": "spare", 00:22:57.734 "uuid": "407b4fbf-6b6f-554d-8435-68924b14c874", 00:22:57.734 "is_configured": true, 00:22:57.734 "data_offset": 2048, 00:22:57.734 "data_size": 63488 00:22:57.734 }, 
00:22:57.734 { 00:22:57.734 "name": "BaseBdev2", 00:22:57.734 "uuid": "d2998870-3986-544d-9f3f-53273257777f", 00:22:57.734 "is_configured": true, 00:22:57.734 "data_offset": 2048, 00:22:57.734 "data_size": 63488 00:22:57.734 }, 00:22:57.734 { 00:22:57.734 "name": "BaseBdev3", 00:22:57.734 "uuid": "538223cf-875e-5f6c-84fd-3f7a6b9fabe5", 00:22:57.734 "is_configured": true, 00:22:57.734 "data_offset": 2048, 00:22:57.734 "data_size": 63488 00:22:57.734 } 00:22:57.734 ] 00:22:57.734 }' 00:22:57.734 21:19:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:57.734 21:19:20 -- common/autotest_common.sh@10 -- # set +x 00:22:58.302 21:19:20 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:58.302 21:19:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:58.302 21:19:20 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:58.302 21:19:20 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:58.302 21:19:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:58.302 21:19:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.302 21:19:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.561 21:19:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:58.561 "name": "raid_bdev1", 00:22:58.561 "uuid": "b685d780-b00b-475e-bd56-42d7212f5dbd", 00:22:58.561 "strip_size_kb": 64, 00:22:58.561 "state": "online", 00:22:58.561 "raid_level": "raid5f", 00:22:58.561 "superblock": true, 00:22:58.561 "num_base_bdevs": 3, 00:22:58.561 "num_base_bdevs_discovered": 3, 00:22:58.561 "num_base_bdevs_operational": 3, 00:22:58.561 "base_bdevs_list": [ 00:22:58.561 { 00:22:58.561 "name": "spare", 00:22:58.561 "uuid": "407b4fbf-6b6f-554d-8435-68924b14c874", 00:22:58.561 "is_configured": true, 00:22:58.561 "data_offset": 2048, 00:22:58.561 "data_size": 63488 00:22:58.561 }, 00:22:58.561 { 00:22:58.561 "name": "BaseBdev2", 00:22:58.561 "uuid": "d2998870-3986-544d-9f3f-53273257777f", 00:22:58.561 "is_configured": true, 00:22:58.561 "data_offset": 2048, 00:22:58.561 "data_size": 63488 00:22:58.561 }, 00:22:58.561 { 00:22:58.561 "name": "BaseBdev3", 00:22:58.561 "uuid": "538223cf-875e-5f6c-84fd-3f7a6b9fabe5", 00:22:58.561 "is_configured": true, 00:22:58.561 "data_offset": 2048, 00:22:58.561 "data_size": 63488 00:22:58.561 } 00:22:58.561 ] 00:22:58.561 }' 00:22:58.561 21:19:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:58.561 21:19:21 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:58.561 21:19:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:58.561 21:19:21 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:58.561 21:19:21 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.561 21:19:21 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:58.820 21:19:21 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:58.820 21:19:21 -- bdev/bdev_raid.sh@709 -- # killprocess 142717 00:22:58.820 21:19:21 -- common/autotest_common.sh@926 -- # '[' -z 142717 ']' 00:22:58.820 21:19:21 -- common/autotest_common.sh@930 -- # kill -0 142717 00:22:58.820 21:19:21 -- common/autotest_common.sh@931 -- # uname 00:22:58.820 21:19:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:58.820 21:19:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142717 00:22:58.820 killing 
process with pid 142717 00:22:58.820 Received shutdown signal, test time was about 60.000000 seconds 00:22:58.820 00:22:58.820 Latency(us) 00:22:58.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.820 =================================================================================================================== 00:22:58.820 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:58.820 21:19:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:58.820 21:19:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:58.820 21:19:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142717' 00:22:58.820 21:19:21 -- common/autotest_common.sh@945 -- # kill 142717 00:22:58.820 21:19:21 -- common/autotest_common.sh@950 -- # wait 142717 00:22:58.820 [2024-06-07 21:19:21.421627] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:58.820 [2024-06-07 21:19:21.421738] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:58.820 [2024-06-07 21:19:21.421869] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:58.820 [2024-06-07 21:19:21.421888] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state offline 00:22:58.820 [2024-06-07 21:19:21.458779] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:59.079 ************************************ 00:22:59.079 END TEST raid5f_rebuild_test_sb 00:22:59.079 ************************************ 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:59.079 00:22:59.079 real 0m23.575s 00:22:59.079 user 0m37.857s 00:22:59.079 sys 0m2.703s 00:22:59.079 21:19:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:59.079 21:19:21 -- common/autotest_common.sh@10 -- # set +x 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:22:59.079 21:19:21 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:22:59.079 21:19:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:59.079 21:19:21 -- common/autotest_common.sh@10 -- # set +x 00:22:59.079 ************************************ 00:22:59.079 START TEST raid5f_state_function_test 00:22:59.079 ************************************ 00:22:59.079 21:19:21 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 false 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:59.079 21:19:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:59.080 21:19:21 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:22:59.080 21:19:21 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:22:59.080 21:19:21 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:22:59.080 21:19:21 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:22:59.080 21:19:21 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:22:59.080 21:19:21 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:22:59.080 21:19:21 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:22:59.080 21:19:21 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:22:59.080 21:19:21 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:22:59.080 21:19:21 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:22:59.080 21:19:21 -- bdev/bdev_raid.sh@226 -- # raid_pid=143402 00:22:59.080 Process raid pid: 143402 00:22:59.080 21:19:21 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 143402' 00:22:59.080 21:19:21 -- bdev/bdev_raid.sh@228 -- # waitforlisten 143402 /var/tmp/spdk-raid.sock 00:22:59.080 21:19:21 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:59.080 21:19:21 -- common/autotest_common.sh@819 -- # '[' -z 143402 ']' 00:22:59.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:59.080 21:19:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:59.080 21:19:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:59.080 21:19:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:59.080 21:19:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:59.080 21:19:21 -- common/autotest_common.sh@10 -- # set +x 00:22:59.339 [2024-06-07 21:19:21.787645] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
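Stripped of the xtrace plumbing, the raid5f state-function test that has just started drives a plain RPC sequence against its private socket. A sketch built only from commands that appear in this log (the test deliberately creates the array first, so it can watch state move from "configuring" to "online" as base bdevs arrive):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # with no base bdevs present yet, the array reports state "configuring"
    $RPC bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # each malloc bdev is claimed on creation; after the fourth, state flips to "online"
    for i in 1 2 3 4; do
        $RPC bdev_malloc_create 32 512 -b BaseBdev$i   # 65536 blocks x 512 B = 32 MiB
    done
    $RPC bdev_raid_get_bdevs all   # inspect name, state, num_base_bdevs_discovered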
00:22:59.339 [2024-06-07 21:19:21.787828] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.339 [2024-06-07 21:19:21.940745] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.597 [2024-06-07 21:19:22.023280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.597 [2024-06-07 21:19:22.081007] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:00.164 21:19:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:00.164 21:19:22 -- common/autotest_common.sh@852 -- # return 0 00:23:00.164 21:19:22 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:00.423 [2024-06-07 21:19:22.990742] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:00.423 [2024-06-07 21:19:22.990848] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:00.423 [2024-06-07 21:19:22.990878] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:00.423 [2024-06-07 21:19:22.990900] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:00.423 [2024-06-07 21:19:22.990908] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:00.423 [2024-06-07 21:19:22.990992] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:00.423 [2024-06-07 21:19:22.991002] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:00.423 [2024-06-07 21:19:22.991025] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:00.423 21:19:23 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:00.423 21:19:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:00.423 21:19:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:00.423 21:19:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:00.423 21:19:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:00.423 21:19:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:00.423 21:19:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:00.423 21:19:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:00.423 21:19:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:00.423 21:19:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:00.423 21:19:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.423 21:19:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:00.681 21:19:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:00.681 "name": "Existed_Raid", 00:23:00.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.681 "strip_size_kb": 64, 00:23:00.681 "state": "configuring", 00:23:00.681 "raid_level": "raid5f", 00:23:00.681 "superblock": false, 00:23:00.681 "num_base_bdevs": 4, 00:23:00.681 "num_base_bdevs_discovered": 0, 00:23:00.681 "num_base_bdevs_operational": 4, 00:23:00.681 "base_bdevs_list": [ 00:23:00.681 { 00:23:00.681 
"name": "BaseBdev1", 00:23:00.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.681 "is_configured": false, 00:23:00.681 "data_offset": 0, 00:23:00.681 "data_size": 0 00:23:00.681 }, 00:23:00.681 { 00:23:00.681 "name": "BaseBdev2", 00:23:00.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.681 "is_configured": false, 00:23:00.681 "data_offset": 0, 00:23:00.681 "data_size": 0 00:23:00.681 }, 00:23:00.681 { 00:23:00.681 "name": "BaseBdev3", 00:23:00.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.681 "is_configured": false, 00:23:00.681 "data_offset": 0, 00:23:00.681 "data_size": 0 00:23:00.681 }, 00:23:00.681 { 00:23:00.681 "name": "BaseBdev4", 00:23:00.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.681 "is_configured": false, 00:23:00.681 "data_offset": 0, 00:23:00.681 "data_size": 0 00:23:00.681 } 00:23:00.681 ] 00:23:00.681 }' 00:23:00.681 21:19:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:00.681 21:19:23 -- common/autotest_common.sh@10 -- # set +x 00:23:01.247 21:19:23 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:01.505 [2024-06-07 21:19:24.090761] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:01.505 [2024-06-07 21:19:24.090806] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:23:01.505 21:19:24 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:01.763 [2024-06-07 21:19:24.274799] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:01.763 [2024-06-07 21:19:24.274851] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:01.763 [2024-06-07 21:19:24.274877] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:01.763 [2024-06-07 21:19:24.274908] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:01.763 [2024-06-07 21:19:24.274916] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:01.763 [2024-06-07 21:19:24.274950] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:01.763 [2024-06-07 21:19:24.274958] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:01.763 [2024-06-07 21:19:24.274995] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:01.763 21:19:24 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:02.021 [2024-06-07 21:19:24.473624] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:02.021 BaseBdev1 00:23:02.021 21:19:24 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:02.021 21:19:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:02.021 21:19:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:02.021 21:19:24 -- common/autotest_common.sh@889 -- # local i 00:23:02.021 21:19:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:02.021 21:19:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:02.021 21:19:24 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:02.280 21:19:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:02.280 [ 00:23:02.280 { 00:23:02.280 "name": "BaseBdev1", 00:23:02.280 "aliases": [ 00:23:02.280 "73f6fc71-66a8-4bc1-9e98-2c38c90b8d8c" 00:23:02.280 ], 00:23:02.280 "product_name": "Malloc disk", 00:23:02.280 "block_size": 512, 00:23:02.280 "num_blocks": 65536, 00:23:02.280 "uuid": "73f6fc71-66a8-4bc1-9e98-2c38c90b8d8c", 00:23:02.280 "assigned_rate_limits": { 00:23:02.280 "rw_ios_per_sec": 0, 00:23:02.280 "rw_mbytes_per_sec": 0, 00:23:02.280 "r_mbytes_per_sec": 0, 00:23:02.280 "w_mbytes_per_sec": 0 00:23:02.280 }, 00:23:02.280 "claimed": true, 00:23:02.280 "claim_type": "exclusive_write", 00:23:02.280 "zoned": false, 00:23:02.280 "supported_io_types": { 00:23:02.280 "read": true, 00:23:02.280 "write": true, 00:23:02.280 "unmap": true, 00:23:02.280 "write_zeroes": true, 00:23:02.280 "flush": true, 00:23:02.280 "reset": true, 00:23:02.280 "compare": false, 00:23:02.280 "compare_and_write": false, 00:23:02.280 "abort": true, 00:23:02.280 "nvme_admin": false, 00:23:02.280 "nvme_io": false 00:23:02.280 }, 00:23:02.280 "memory_domains": [ 00:23:02.280 { 00:23:02.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.280 "dma_device_type": 2 00:23:02.280 } 00:23:02.280 ], 00:23:02.280 "driver_specific": {} 00:23:02.280 } 00:23:02.280 ] 00:23:02.280 21:19:24 -- common/autotest_common.sh@895 -- # return 0 00:23:02.280 21:19:24 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:02.280 21:19:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:02.280 21:19:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:02.280 21:19:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:02.280 21:19:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:02.280 21:19:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:02.280 21:19:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:02.280 21:19:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:02.280 21:19:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:02.280 21:19:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:02.280 21:19:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.280 21:19:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:02.538 21:19:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:02.538 "name": "Existed_Raid", 00:23:02.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.538 "strip_size_kb": 64, 00:23:02.538 "state": "configuring", 00:23:02.538 "raid_level": "raid5f", 00:23:02.538 "superblock": false, 00:23:02.538 "num_base_bdevs": 4, 00:23:02.538 "num_base_bdevs_discovered": 1, 00:23:02.538 "num_base_bdevs_operational": 4, 00:23:02.538 "base_bdevs_list": [ 00:23:02.538 { 00:23:02.538 "name": "BaseBdev1", 00:23:02.538 "uuid": "73f6fc71-66a8-4bc1-9e98-2c38c90b8d8c", 00:23:02.538 "is_configured": true, 00:23:02.538 "data_offset": 0, 00:23:02.538 "data_size": 65536 00:23:02.538 }, 00:23:02.538 { 00:23:02.538 "name": "BaseBdev2", 00:23:02.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.538 "is_configured": false, 00:23:02.538 "data_offset": 0, 00:23:02.538 "data_size": 0 00:23:02.538 }, 
00:23:02.538 { 00:23:02.538 "name": "BaseBdev3", 00:23:02.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.538 "is_configured": false, 00:23:02.538 "data_offset": 0, 00:23:02.538 "data_size": 0 00:23:02.538 }, 00:23:02.538 { 00:23:02.538 "name": "BaseBdev4", 00:23:02.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.538 "is_configured": false, 00:23:02.538 "data_offset": 0, 00:23:02.538 "data_size": 0 00:23:02.538 } 00:23:02.538 ] 00:23:02.538 }' 00:23:02.538 21:19:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:02.538 21:19:25 -- common/autotest_common.sh@10 -- # set +x 00:23:03.472 21:19:25 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:03.472 [2024-06-07 21:19:26.098071] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:03.472 [2024-06-07 21:19:26.098156] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:23:03.472 21:19:26 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:23:03.472 21:19:26 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:03.730 [2024-06-07 21:19:26.294165] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:03.730 [2024-06-07 21:19:26.296038] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:03.730 [2024-06-07 21:19:26.296129] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:03.730 [2024-06-07 21:19:26.296156] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:03.730 [2024-06-07 21:19:26.296179] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:03.730 [2024-06-07 21:19:26.296188] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:03.730 [2024-06-07 21:19:26.296203] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:03.731 21:19:26 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:03.731 21:19:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:03.731 21:19:26 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:03.731 21:19:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:03.731 21:19:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:03.731 21:19:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:03.731 21:19:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:03.731 21:19:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:03.731 21:19:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:03.731 21:19:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:03.731 21:19:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:03.731 21:19:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:03.731 21:19:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.731 21:19:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:03.989 21:19:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:03.989 "name": "Existed_Raid", 00:23:03.989 
"uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.989 "strip_size_kb": 64, 00:23:03.989 "state": "configuring", 00:23:03.989 "raid_level": "raid5f", 00:23:03.989 "superblock": false, 00:23:03.989 "num_base_bdevs": 4, 00:23:03.989 "num_base_bdevs_discovered": 1, 00:23:03.989 "num_base_bdevs_operational": 4, 00:23:03.989 "base_bdevs_list": [ 00:23:03.989 { 00:23:03.989 "name": "BaseBdev1", 00:23:03.989 "uuid": "73f6fc71-66a8-4bc1-9e98-2c38c90b8d8c", 00:23:03.989 "is_configured": true, 00:23:03.989 "data_offset": 0, 00:23:03.989 "data_size": 65536 00:23:03.989 }, 00:23:03.989 { 00:23:03.989 "name": "BaseBdev2", 00:23:03.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.989 "is_configured": false, 00:23:03.989 "data_offset": 0, 00:23:03.989 "data_size": 0 00:23:03.989 }, 00:23:03.989 { 00:23:03.989 "name": "BaseBdev3", 00:23:03.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.989 "is_configured": false, 00:23:03.989 "data_offset": 0, 00:23:03.989 "data_size": 0 00:23:03.989 }, 00:23:03.989 { 00:23:03.989 "name": "BaseBdev4", 00:23:03.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.989 "is_configured": false, 00:23:03.989 "data_offset": 0, 00:23:03.989 "data_size": 0 00:23:03.989 } 00:23:03.989 ] 00:23:03.989 }' 00:23:03.989 21:19:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:03.989 21:19:26 -- common/autotest_common.sh@10 -- # set +x 00:23:04.556 21:19:27 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:04.815 [2024-06-07 21:19:27.452814] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:04.815 BaseBdev2 00:23:04.815 21:19:27 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:04.815 21:19:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:23:04.815 21:19:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:04.815 21:19:27 -- common/autotest_common.sh@889 -- # local i 00:23:04.815 21:19:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:04.815 21:19:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:04.815 21:19:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:05.073 21:19:27 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:05.332 [ 00:23:05.332 { 00:23:05.332 "name": "BaseBdev2", 00:23:05.332 "aliases": [ 00:23:05.332 "1f17e3ee-bfe9-496e-9c66-dc168b84e3c1" 00:23:05.332 ], 00:23:05.332 "product_name": "Malloc disk", 00:23:05.332 "block_size": 512, 00:23:05.332 "num_blocks": 65536, 00:23:05.332 "uuid": "1f17e3ee-bfe9-496e-9c66-dc168b84e3c1", 00:23:05.332 "assigned_rate_limits": { 00:23:05.332 "rw_ios_per_sec": 0, 00:23:05.332 "rw_mbytes_per_sec": 0, 00:23:05.332 "r_mbytes_per_sec": 0, 00:23:05.332 "w_mbytes_per_sec": 0 00:23:05.332 }, 00:23:05.332 "claimed": true, 00:23:05.332 "claim_type": "exclusive_write", 00:23:05.332 "zoned": false, 00:23:05.332 "supported_io_types": { 00:23:05.332 "read": true, 00:23:05.332 "write": true, 00:23:05.332 "unmap": true, 00:23:05.332 "write_zeroes": true, 00:23:05.332 "flush": true, 00:23:05.332 "reset": true, 00:23:05.332 "compare": false, 00:23:05.332 "compare_and_write": false, 00:23:05.332 "abort": true, 00:23:05.332 "nvme_admin": false, 00:23:05.332 "nvme_io": false 00:23:05.332 }, 00:23:05.332 "memory_domains": [ 
00:23:05.332 { 00:23:05.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:05.332 "dma_device_type": 2 00:23:05.332 } 00:23:05.332 ], 00:23:05.332 "driver_specific": {} 00:23:05.332 } 00:23:05.332 ] 00:23:05.332 21:19:27 -- common/autotest_common.sh@895 -- # return 0 00:23:05.332 21:19:27 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:05.332 21:19:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:05.332 21:19:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:05.332 21:19:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:05.332 21:19:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:05.332 21:19:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:05.332 21:19:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:05.332 21:19:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:05.332 21:19:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:05.332 21:19:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:05.332 21:19:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:05.332 21:19:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:05.332 21:19:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.332 21:19:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:05.591 21:19:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:05.591 "name": "Existed_Raid", 00:23:05.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.591 "strip_size_kb": 64, 00:23:05.591 "state": "configuring", 00:23:05.591 "raid_level": "raid5f", 00:23:05.591 "superblock": false, 00:23:05.591 "num_base_bdevs": 4, 00:23:05.591 "num_base_bdevs_discovered": 2, 00:23:05.591 "num_base_bdevs_operational": 4, 00:23:05.591 "base_bdevs_list": [ 00:23:05.591 { 00:23:05.591 "name": "BaseBdev1", 00:23:05.591 "uuid": "73f6fc71-66a8-4bc1-9e98-2c38c90b8d8c", 00:23:05.591 "is_configured": true, 00:23:05.591 "data_offset": 0, 00:23:05.591 "data_size": 65536 00:23:05.591 }, 00:23:05.591 { 00:23:05.591 "name": "BaseBdev2", 00:23:05.591 "uuid": "1f17e3ee-bfe9-496e-9c66-dc168b84e3c1", 00:23:05.591 "is_configured": true, 00:23:05.591 "data_offset": 0, 00:23:05.591 "data_size": 65536 00:23:05.591 }, 00:23:05.591 { 00:23:05.591 "name": "BaseBdev3", 00:23:05.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.591 "is_configured": false, 00:23:05.591 "data_offset": 0, 00:23:05.591 "data_size": 0 00:23:05.591 }, 00:23:05.591 { 00:23:05.591 "name": "BaseBdev4", 00:23:05.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.591 "is_configured": false, 00:23:05.591 "data_offset": 0, 00:23:05.591 "data_size": 0 00:23:05.591 } 00:23:05.591 ] 00:23:05.591 }' 00:23:05.591 21:19:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:05.591 21:19:28 -- common/autotest_common.sh@10 -- # set +x 00:23:06.525 21:19:28 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:06.525 [2024-06-07 21:19:29.062255] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:06.525 BaseBdev3 00:23:06.525 21:19:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:06.525 21:19:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:23:06.525 21:19:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:06.525 
21:19:29 -- common/autotest_common.sh@889 -- # local i 00:23:06.525 21:19:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:06.525 21:19:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:06.525 21:19:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:06.785 21:19:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:07.057 [ 00:23:07.057 { 00:23:07.057 "name": "BaseBdev3", 00:23:07.057 "aliases": [ 00:23:07.057 "2a5ce1e4-711e-4aff-bd2b-3541b1435711" 00:23:07.057 ], 00:23:07.057 "product_name": "Malloc disk", 00:23:07.057 "block_size": 512, 00:23:07.057 "num_blocks": 65536, 00:23:07.057 "uuid": "2a5ce1e4-711e-4aff-bd2b-3541b1435711", 00:23:07.057 "assigned_rate_limits": { 00:23:07.057 "rw_ios_per_sec": 0, 00:23:07.057 "rw_mbytes_per_sec": 0, 00:23:07.057 "r_mbytes_per_sec": 0, 00:23:07.057 "w_mbytes_per_sec": 0 00:23:07.057 }, 00:23:07.057 "claimed": true, 00:23:07.057 "claim_type": "exclusive_write", 00:23:07.057 "zoned": false, 00:23:07.057 "supported_io_types": { 00:23:07.057 "read": true, 00:23:07.057 "write": true, 00:23:07.057 "unmap": true, 00:23:07.057 "write_zeroes": true, 00:23:07.057 "flush": true, 00:23:07.057 "reset": true, 00:23:07.057 "compare": false, 00:23:07.057 "compare_and_write": false, 00:23:07.057 "abort": true, 00:23:07.057 "nvme_admin": false, 00:23:07.057 "nvme_io": false 00:23:07.057 }, 00:23:07.057 "memory_domains": [ 00:23:07.057 { 00:23:07.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.057 "dma_device_type": 2 00:23:07.057 } 00:23:07.057 ], 00:23:07.057 "driver_specific": {} 00:23:07.057 } 00:23:07.057 ] 00:23:07.057 21:19:29 -- common/autotest_common.sh@895 -- # return 0 00:23:07.057 21:19:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:07.057 21:19:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:07.057 21:19:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:07.057 21:19:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:07.057 21:19:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:07.057 21:19:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:07.057 21:19:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:07.057 21:19:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:07.057 21:19:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:07.057 21:19:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:07.057 21:19:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:07.057 21:19:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:07.057 21:19:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.057 21:19:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:07.057 21:19:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:07.057 "name": "Existed_Raid", 00:23:07.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.057 "strip_size_kb": 64, 00:23:07.057 "state": "configuring", 00:23:07.057 "raid_level": "raid5f", 00:23:07.057 "superblock": false, 00:23:07.057 "num_base_bdevs": 4, 00:23:07.057 "num_base_bdevs_discovered": 3, 00:23:07.057 "num_base_bdevs_operational": 4, 00:23:07.057 "base_bdevs_list": [ 00:23:07.057 { 00:23:07.057 "name": 
"BaseBdev1", 00:23:07.057 "uuid": "73f6fc71-66a8-4bc1-9e98-2c38c90b8d8c", 00:23:07.057 "is_configured": true, 00:23:07.057 "data_offset": 0, 00:23:07.057 "data_size": 65536 00:23:07.057 }, 00:23:07.057 { 00:23:07.057 "name": "BaseBdev2", 00:23:07.057 "uuid": "1f17e3ee-bfe9-496e-9c66-dc168b84e3c1", 00:23:07.057 "is_configured": true, 00:23:07.057 "data_offset": 0, 00:23:07.057 "data_size": 65536 00:23:07.057 }, 00:23:07.057 { 00:23:07.057 "name": "BaseBdev3", 00:23:07.057 "uuid": "2a5ce1e4-711e-4aff-bd2b-3541b1435711", 00:23:07.057 "is_configured": true, 00:23:07.057 "data_offset": 0, 00:23:07.057 "data_size": 65536 00:23:07.057 }, 00:23:07.057 { 00:23:07.057 "name": "BaseBdev4", 00:23:07.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.057 "is_configured": false, 00:23:07.057 "data_offset": 0, 00:23:07.057 "data_size": 0 00:23:07.057 } 00:23:07.057 ] 00:23:07.057 }' 00:23:07.057 21:19:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:07.057 21:19:29 -- common/autotest_common.sh@10 -- # set +x 00:23:07.990 21:19:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:07.990 [2024-06-07 21:19:30.627458] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:07.990 [2024-06-07 21:19:30.627582] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:23:07.990 [2024-06-07 21:19:30.627596] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:07.990 [2024-06-07 21:19:30.627750] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:23:07.990 [2024-06-07 21:19:30.628643] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:23:07.990 [2024-06-07 21:19:30.628668] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:23:07.990 [2024-06-07 21:19:30.628996] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:07.990 BaseBdev4 00:23:07.990 21:19:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:23:07.990 21:19:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:23:07.990 21:19:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:07.990 21:19:30 -- common/autotest_common.sh@889 -- # local i 00:23:07.990 21:19:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:07.990 21:19:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:07.991 21:19:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:08.247 21:19:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:08.505 [ 00:23:08.505 { 00:23:08.505 "name": "BaseBdev4", 00:23:08.505 "aliases": [ 00:23:08.505 "b84d3fb2-d98a-4057-bef8-02ca40efc808" 00:23:08.505 ], 00:23:08.505 "product_name": "Malloc disk", 00:23:08.505 "block_size": 512, 00:23:08.505 "num_blocks": 65536, 00:23:08.505 "uuid": "b84d3fb2-d98a-4057-bef8-02ca40efc808", 00:23:08.505 "assigned_rate_limits": { 00:23:08.505 "rw_ios_per_sec": 0, 00:23:08.505 "rw_mbytes_per_sec": 0, 00:23:08.505 "r_mbytes_per_sec": 0, 00:23:08.505 "w_mbytes_per_sec": 0 00:23:08.505 }, 00:23:08.505 "claimed": true, 00:23:08.505 "claim_type": "exclusive_write", 00:23:08.505 "zoned": false, 00:23:08.505 
"supported_io_types": { 00:23:08.505 "read": true, 00:23:08.505 "write": true, 00:23:08.505 "unmap": true, 00:23:08.505 "write_zeroes": true, 00:23:08.505 "flush": true, 00:23:08.505 "reset": true, 00:23:08.505 "compare": false, 00:23:08.505 "compare_and_write": false, 00:23:08.505 "abort": true, 00:23:08.505 "nvme_admin": false, 00:23:08.505 "nvme_io": false 00:23:08.505 }, 00:23:08.505 "memory_domains": [ 00:23:08.505 { 00:23:08.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.505 "dma_device_type": 2 00:23:08.505 } 00:23:08.505 ], 00:23:08.505 "driver_specific": {} 00:23:08.505 } 00:23:08.505 ] 00:23:08.505 21:19:31 -- common/autotest_common.sh@895 -- # return 0 00:23:08.505 21:19:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:08.505 21:19:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:08.505 21:19:31 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:08.505 21:19:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:08.505 21:19:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:08.505 21:19:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:08.505 21:19:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:08.505 21:19:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:08.505 21:19:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:08.505 21:19:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:08.505 21:19:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:08.505 21:19:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:08.505 21:19:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.505 21:19:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.762 21:19:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:08.762 "name": "Existed_Raid", 00:23:08.762 "uuid": "652b59d4-1e50-4649-b129-aa417bb4cff8", 00:23:08.762 "strip_size_kb": 64, 00:23:08.762 "state": "online", 00:23:08.762 "raid_level": "raid5f", 00:23:08.762 "superblock": false, 00:23:08.762 "num_base_bdevs": 4, 00:23:08.762 "num_base_bdevs_discovered": 4, 00:23:08.762 "num_base_bdevs_operational": 4, 00:23:08.762 "base_bdevs_list": [ 00:23:08.762 { 00:23:08.762 "name": "BaseBdev1", 00:23:08.762 "uuid": "73f6fc71-66a8-4bc1-9e98-2c38c90b8d8c", 00:23:08.762 "is_configured": true, 00:23:08.762 "data_offset": 0, 00:23:08.762 "data_size": 65536 00:23:08.762 }, 00:23:08.762 { 00:23:08.762 "name": "BaseBdev2", 00:23:08.762 "uuid": "1f17e3ee-bfe9-496e-9c66-dc168b84e3c1", 00:23:08.762 "is_configured": true, 00:23:08.762 "data_offset": 0, 00:23:08.762 "data_size": 65536 00:23:08.762 }, 00:23:08.762 { 00:23:08.762 "name": "BaseBdev3", 00:23:08.762 "uuid": "2a5ce1e4-711e-4aff-bd2b-3541b1435711", 00:23:08.762 "is_configured": true, 00:23:08.762 "data_offset": 0, 00:23:08.762 "data_size": 65536 00:23:08.762 }, 00:23:08.762 { 00:23:08.762 "name": "BaseBdev4", 00:23:08.762 "uuid": "b84d3fb2-d98a-4057-bef8-02ca40efc808", 00:23:08.762 "is_configured": true, 00:23:08.762 "data_offset": 0, 00:23:08.762 "data_size": 65536 00:23:08.762 } 00:23:08.762 ] 00:23:08.762 }' 00:23:08.762 21:19:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:08.763 21:19:31 -- common/autotest_common.sh@10 -- # set +x 00:23:09.328 21:19:31 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:23:09.586 [2024-06-07 21:19:32.185401] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:09.586 21:19:32 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:09.586 21:19:32 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:09.586 21:19:32 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:09.586 21:19:32 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:09.586 21:19:32 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:09.586 21:19:32 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:09.586 21:19:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:09.586 21:19:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:09.586 21:19:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:09.586 21:19:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:09.586 21:19:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:09.586 21:19:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:09.586 21:19:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:09.586 21:19:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:09.586 21:19:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:09.586 21:19:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.586 21:19:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.843 21:19:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:09.843 "name": "Existed_Raid", 00:23:09.843 "uuid": "652b59d4-1e50-4649-b129-aa417bb4cff8", 00:23:09.843 "strip_size_kb": 64, 00:23:09.843 "state": "online", 00:23:09.843 "raid_level": "raid5f", 00:23:09.843 "superblock": false, 00:23:09.843 "num_base_bdevs": 4, 00:23:09.843 "num_base_bdevs_discovered": 3, 00:23:09.843 "num_base_bdevs_operational": 3, 00:23:09.843 "base_bdevs_list": [ 00:23:09.843 { 00:23:09.843 "name": null, 00:23:09.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.843 "is_configured": false, 00:23:09.843 "data_offset": 0, 00:23:09.843 "data_size": 65536 00:23:09.843 }, 00:23:09.843 { 00:23:09.843 "name": "BaseBdev2", 00:23:09.843 "uuid": "1f17e3ee-bfe9-496e-9c66-dc168b84e3c1", 00:23:09.843 "is_configured": true, 00:23:09.843 "data_offset": 0, 00:23:09.843 "data_size": 65536 00:23:09.843 }, 00:23:09.843 { 00:23:09.843 "name": "BaseBdev3", 00:23:09.843 "uuid": "2a5ce1e4-711e-4aff-bd2b-3541b1435711", 00:23:09.843 "is_configured": true, 00:23:09.843 "data_offset": 0, 00:23:09.843 "data_size": 65536 00:23:09.843 }, 00:23:09.843 { 00:23:09.843 "name": "BaseBdev4", 00:23:09.843 "uuid": "b84d3fb2-d98a-4057-bef8-02ca40efc808", 00:23:09.843 "is_configured": true, 00:23:09.843 "data_offset": 0, 00:23:09.843 "data_size": 65536 00:23:09.843 } 00:23:09.843 ] 00:23:09.843 }' 00:23:09.843 21:19:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:09.843 21:19:32 -- common/autotest_common.sh@10 -- # set +x 00:23:10.775 21:19:33 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:10.775 21:19:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:10.775 21:19:33 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.775 21:19:33 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:10.775 21:19:33 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:10.775 21:19:33 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:23:10.775 21:19:33 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:11.032 [2024-06-07 21:19:33.635891] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:11.032 [2024-06-07 21:19:33.635925] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:11.032 [2024-06-07 21:19:33.636042] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:11.032 21:19:33 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:11.032 21:19:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:11.032 21:19:33 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.032 21:19:33 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:11.290 21:19:33 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:11.290 21:19:33 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:11.290 21:19:33 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:11.547 [2024-06-07 21:19:34.150318] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:11.547 21:19:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:11.547 21:19:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:11.547 21:19:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.547 21:19:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:11.805 21:19:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:11.805 21:19:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:11.805 21:19:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:12.063 [2024-06-07 21:19:34.608811] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:12.063 [2024-06-07 21:19:34.608882] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:23:12.063 21:19:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:12.063 21:19:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:12.063 21:19:34 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.063 21:19:34 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:12.321 21:19:34 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:12.321 21:19:34 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:12.321 21:19:34 -- bdev/bdev_raid.sh@287 -- # killprocess 143402 00:23:12.321 21:19:34 -- common/autotest_common.sh@926 -- # '[' -z 143402 ']' 00:23:12.321 21:19:34 -- common/autotest_common.sh@930 -- # kill -0 143402 00:23:12.321 21:19:34 -- common/autotest_common.sh@931 -- # uname 00:23:12.321 21:19:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:12.321 21:19:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 143402 00:23:12.321 killing process with pid 143402 00:23:12.321 21:19:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:12.321 21:19:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:12.321 21:19:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 143402' 00:23:12.321 21:19:34 -- 
common/autotest_common.sh@945 -- # kill 143402 00:23:12.321 21:19:34 -- common/autotest_common.sh@950 -- # wait 143402 00:23:12.321 [2024-06-07 21:19:34.854292] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:12.321 [2024-06-07 21:19:34.854376] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:12.580 ************************************ 00:23:12.580 END TEST raid5f_state_function_test 00:23:12.580 ************************************ 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:12.580 00:23:12.580 real 0m13.339s 00:23:12.580 user 0m25.139s 00:23:12.580 sys 0m1.395s 00:23:12.580 21:19:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:12.580 21:19:35 -- common/autotest_common.sh@10 -- # set +x 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:23:12.580 21:19:35 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:23:12.580 21:19:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:12.580 21:19:35 -- common/autotest_common.sh@10 -- # set +x 00:23:12.580 ************************************ 00:23:12.580 START TEST raid5f_state_function_test_sb 00:23:12.580 ************************************ 00:23:12.580 21:19:35 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 true 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:23:12.580 
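The raid5f_state_function_test_sb run that starts here follows the same flow as the test that just ended; the material difference is superblock_create_arg=-s, which is forwarded to bdev_raid_create so that every base bdev gets an on-disk raid superblock, shifting data_offset from 0 to 2048 and data_size from 65536 to 63488 blocks, as the Existed_Raid dumps below show. A sketch of the create call as this run later issues it (rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path):

    # -s writes a superblock to each base bdev; -z 64 sets the 64 KiB strip size
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid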
21:19:35 -- bdev/bdev_raid.sh@226 -- # raid_pid=143847 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 143847' 00:23:12.580 Process raid pid: 143847 00:23:12.580 21:19:35 -- bdev/bdev_raid.sh@228 -- # waitforlisten 143847 /var/tmp/spdk-raid.sock 00:23:12.580 21:19:35 -- common/autotest_common.sh@819 -- # '[' -z 143847 ']' 00:23:12.580 21:19:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:12.580 21:19:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:12.580 21:19:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:12.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:12.580 21:19:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:12.580 21:19:35 -- common/autotest_common.sh@10 -- # set +x 00:23:12.580 [2024-06-07 21:19:35.193976] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:12.580 [2024-06-07 21:19:35.194204] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.839 [2024-06-07 21:19:35.355392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.839 [2024-06-07 21:19:35.438415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.839 [2024-06-07 21:19:35.493154] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:13.774 21:19:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:13.774 21:19:36 -- common/autotest_common.sh@852 -- # return 0 00:23:13.774 21:19:36 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:13.774 [2024-06-07 21:19:36.317000] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:13.774 [2024-06-07 21:19:36.317061] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:13.774 [2024-06-07 21:19:36.317089] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:13.774 [2024-06-07 21:19:36.317110] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:13.774 [2024-06-07 21:19:36.317116] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:13.774 [2024-06-07 21:19:36.317148] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:13.774 [2024-06-07 21:19:36.317156] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:13.774 [2024-06-07 21:19:36.317175] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:13.774 21:19:36 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:13.774 21:19:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:13.774 21:19:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:13.774 21:19:36 -- bdev/bdev_raid.sh@119 -- # 
local raid_level=raid5f 00:23:13.774 21:19:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:13.774 21:19:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:13.774 21:19:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:13.774 21:19:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:13.774 21:19:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:13.774 21:19:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:13.774 21:19:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.774 21:19:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:14.033 21:19:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:14.033 "name": "Existed_Raid", 00:23:14.033 "uuid": "a98f7202-f7d5-416d-9ee8-36b614f48d48", 00:23:14.033 "strip_size_kb": 64, 00:23:14.033 "state": "configuring", 00:23:14.033 "raid_level": "raid5f", 00:23:14.033 "superblock": true, 00:23:14.033 "num_base_bdevs": 4, 00:23:14.033 "num_base_bdevs_discovered": 0, 00:23:14.033 "num_base_bdevs_operational": 4, 00:23:14.033 "base_bdevs_list": [ 00:23:14.033 { 00:23:14.033 "name": "BaseBdev1", 00:23:14.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.033 "is_configured": false, 00:23:14.033 "data_offset": 0, 00:23:14.033 "data_size": 0 00:23:14.033 }, 00:23:14.033 { 00:23:14.033 "name": "BaseBdev2", 00:23:14.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.033 "is_configured": false, 00:23:14.033 "data_offset": 0, 00:23:14.033 "data_size": 0 00:23:14.033 }, 00:23:14.033 { 00:23:14.033 "name": "BaseBdev3", 00:23:14.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.033 "is_configured": false, 00:23:14.033 "data_offset": 0, 00:23:14.033 "data_size": 0 00:23:14.033 }, 00:23:14.033 { 00:23:14.033 "name": "BaseBdev4", 00:23:14.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.033 "is_configured": false, 00:23:14.033 "data_offset": 0, 00:23:14.033 "data_size": 0 00:23:14.033 } 00:23:14.033 ] 00:23:14.033 }' 00:23:14.033 21:19:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:14.033 21:19:36 -- common/autotest_common.sh@10 -- # set +x 00:23:14.599 21:19:37 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:14.857 [2024-06-07 21:19:37.433111] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:14.858 [2024-06-07 21:19:37.433166] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:23:14.858 21:19:37 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:15.116 [2024-06-07 21:19:37.621230] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:15.116 [2024-06-07 21:19:37.621280] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:15.116 [2024-06-07 21:19:37.621306] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:15.116 [2024-06-07 21:19:37.621336] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:15.116 [2024-06-07 21:19:37.621344] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:15.116 
[2024-06-07 21:19:37.621377] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:15.116 [2024-06-07 21:19:37.621384] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:15.116 [2024-06-07 21:19:37.621403] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:15.116 21:19:37 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:15.374 [2024-06-07 21:19:37.871906] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:15.374 BaseBdev1 00:23:15.374 21:19:37 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:15.374 21:19:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:15.374 21:19:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:15.374 21:19:37 -- common/autotest_common.sh@889 -- # local i 00:23:15.374 21:19:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:15.374 21:19:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:15.374 21:19:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:15.633 21:19:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:15.633 [ 00:23:15.633 { 00:23:15.633 "name": "BaseBdev1", 00:23:15.633 "aliases": [ 00:23:15.633 "04cb0710-09e0-424e-acc7-4017a774590e" 00:23:15.633 ], 00:23:15.633 "product_name": "Malloc disk", 00:23:15.633 "block_size": 512, 00:23:15.633 "num_blocks": 65536, 00:23:15.633 "uuid": "04cb0710-09e0-424e-acc7-4017a774590e", 00:23:15.633 "assigned_rate_limits": { 00:23:15.633 "rw_ios_per_sec": 0, 00:23:15.633 "rw_mbytes_per_sec": 0, 00:23:15.633 "r_mbytes_per_sec": 0, 00:23:15.633 "w_mbytes_per_sec": 0 00:23:15.633 }, 00:23:15.633 "claimed": true, 00:23:15.633 "claim_type": "exclusive_write", 00:23:15.633 "zoned": false, 00:23:15.633 "supported_io_types": { 00:23:15.633 "read": true, 00:23:15.633 "write": true, 00:23:15.633 "unmap": true, 00:23:15.633 "write_zeroes": true, 00:23:15.633 "flush": true, 00:23:15.633 "reset": true, 00:23:15.633 "compare": false, 00:23:15.633 "compare_and_write": false, 00:23:15.633 "abort": true, 00:23:15.633 "nvme_admin": false, 00:23:15.633 "nvme_io": false 00:23:15.633 }, 00:23:15.633 "memory_domains": [ 00:23:15.633 { 00:23:15.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.633 "dma_device_type": 2 00:23:15.633 } 00:23:15.633 ], 00:23:15.633 "driver_specific": {} 00:23:15.633 } 00:23:15.633 ] 00:23:15.633 21:19:38 -- common/autotest_common.sh@895 -- # return 0 00:23:15.633 21:19:38 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:15.633 21:19:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:15.633 21:19:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:15.633 21:19:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:15.633 21:19:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:15.633 21:19:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:15.633 21:19:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:15.633 21:19:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:15.633 21:19:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:15.633 
21:19:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:15.633 21:19:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.633 21:19:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:15.891 21:19:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:15.891 "name": "Existed_Raid", 00:23:15.891 "uuid": "d850f559-9b5b-4535-b427-aa6ef59e0918", 00:23:15.891 "strip_size_kb": 64, 00:23:15.891 "state": "configuring", 00:23:15.891 "raid_level": "raid5f", 00:23:15.891 "superblock": true, 00:23:15.891 "num_base_bdevs": 4, 00:23:15.891 "num_base_bdevs_discovered": 1, 00:23:15.891 "num_base_bdevs_operational": 4, 00:23:15.891 "base_bdevs_list": [ 00:23:15.891 { 00:23:15.891 "name": "BaseBdev1", 00:23:15.891 "uuid": "04cb0710-09e0-424e-acc7-4017a774590e", 00:23:15.891 "is_configured": true, 00:23:15.891 "data_offset": 2048, 00:23:15.891 "data_size": 63488 00:23:15.891 }, 00:23:15.891 { 00:23:15.891 "name": "BaseBdev2", 00:23:15.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.891 "is_configured": false, 00:23:15.891 "data_offset": 0, 00:23:15.891 "data_size": 0 00:23:15.891 }, 00:23:15.891 { 00:23:15.891 "name": "BaseBdev3", 00:23:15.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.891 "is_configured": false, 00:23:15.891 "data_offset": 0, 00:23:15.891 "data_size": 0 00:23:15.891 }, 00:23:15.891 { 00:23:15.891 "name": "BaseBdev4", 00:23:15.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.891 "is_configured": false, 00:23:15.891 "data_offset": 0, 00:23:15.891 "data_size": 0 00:23:15.891 } 00:23:15.891 ] 00:23:15.891 }' 00:23:15.891 21:19:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:15.891 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:23:16.827 21:19:39 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:16.827 [2024-06-07 21:19:39.384406] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:16.827 [2024-06-07 21:19:39.384488] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:23:16.827 21:19:39 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:23:16.827 21:19:39 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:17.085 21:19:39 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:17.344 BaseBdev1 00:23:17.344 21:19:39 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:23:17.344 21:19:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:17.344 21:19:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:17.344 21:19:39 -- common/autotest_common.sh@889 -- # local i 00:23:17.344 21:19:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:17.344 21:19:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:17.344 21:19:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:17.603 21:19:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:17.603 [ 00:23:17.603 { 00:23:17.603 "name": "BaseBdev1", 00:23:17.603 "aliases": [ 00:23:17.603 
"fecc811b-810d-4189-b146-28477740534f" 00:23:17.603 ], 00:23:17.603 "product_name": "Malloc disk", 00:23:17.603 "block_size": 512, 00:23:17.603 "num_blocks": 65536, 00:23:17.603 "uuid": "fecc811b-810d-4189-b146-28477740534f", 00:23:17.603 "assigned_rate_limits": { 00:23:17.603 "rw_ios_per_sec": 0, 00:23:17.603 "rw_mbytes_per_sec": 0, 00:23:17.603 "r_mbytes_per_sec": 0, 00:23:17.603 "w_mbytes_per_sec": 0 00:23:17.603 }, 00:23:17.603 "claimed": false, 00:23:17.603 "zoned": false, 00:23:17.603 "supported_io_types": { 00:23:17.603 "read": true, 00:23:17.603 "write": true, 00:23:17.603 "unmap": true, 00:23:17.603 "write_zeroes": true, 00:23:17.603 "flush": true, 00:23:17.603 "reset": true, 00:23:17.603 "compare": false, 00:23:17.603 "compare_and_write": false, 00:23:17.603 "abort": true, 00:23:17.603 "nvme_admin": false, 00:23:17.603 "nvme_io": false 00:23:17.603 }, 00:23:17.603 "memory_domains": [ 00:23:17.603 { 00:23:17.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.603 "dma_device_type": 2 00:23:17.603 } 00:23:17.603 ], 00:23:17.603 "driver_specific": {} 00:23:17.603 } 00:23:17.603 ] 00:23:17.603 21:19:40 -- common/autotest_common.sh@895 -- # return 0 00:23:17.603 21:19:40 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:17.862 [2024-06-07 21:19:40.416915] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:17.862 [2024-06-07 21:19:40.418698] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:17.862 [2024-06-07 21:19:40.418765] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:17.862 [2024-06-07 21:19:40.418794] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:17.862 [2024-06-07 21:19:40.418816] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:17.862 [2024-06-07 21:19:40.418825] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:17.862 [2024-06-07 21:19:40.418839] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:17.862 21:19:40 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:17.862 21:19:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:17.862 21:19:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:17.862 21:19:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:17.862 21:19:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:17.862 21:19:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:17.862 21:19:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:17.862 21:19:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:17.862 21:19:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:17.862 21:19:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:17.862 21:19:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:17.862 21:19:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:17.862 21:19:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.862 21:19:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:18.120 21:19:40 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:23:18.120 "name": "Existed_Raid", 00:23:18.120 "uuid": "33453dcc-887c-4ae8-865d-9c8be6aa1b0b", 00:23:18.120 "strip_size_kb": 64, 00:23:18.120 "state": "configuring", 00:23:18.120 "raid_level": "raid5f", 00:23:18.120 "superblock": true, 00:23:18.120 "num_base_bdevs": 4, 00:23:18.120 "num_base_bdevs_discovered": 1, 00:23:18.120 "num_base_bdevs_operational": 4, 00:23:18.120 "base_bdevs_list": [ 00:23:18.120 { 00:23:18.120 "name": "BaseBdev1", 00:23:18.120 "uuid": "fecc811b-810d-4189-b146-28477740534f", 00:23:18.120 "is_configured": true, 00:23:18.120 "data_offset": 2048, 00:23:18.120 "data_size": 63488 00:23:18.120 }, 00:23:18.120 { 00:23:18.120 "name": "BaseBdev2", 00:23:18.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.120 "is_configured": false, 00:23:18.120 "data_offset": 0, 00:23:18.120 "data_size": 0 00:23:18.120 }, 00:23:18.120 { 00:23:18.120 "name": "BaseBdev3", 00:23:18.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.120 "is_configured": false, 00:23:18.120 "data_offset": 0, 00:23:18.120 "data_size": 0 00:23:18.120 }, 00:23:18.120 { 00:23:18.120 "name": "BaseBdev4", 00:23:18.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.120 "is_configured": false, 00:23:18.120 "data_offset": 0, 00:23:18.120 "data_size": 0 00:23:18.120 } 00:23:18.120 ] 00:23:18.120 }' 00:23:18.120 21:19:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:18.120 21:19:40 -- common/autotest_common.sh@10 -- # set +x 00:23:18.714 21:19:41 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:18.972 [2024-06-07 21:19:41.617214] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:18.972 BaseBdev2 00:23:18.972 21:19:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:18.972 21:19:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:23:18.972 21:19:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:18.972 21:19:41 -- common/autotest_common.sh@889 -- # local i 00:23:18.972 21:19:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:18.972 21:19:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:18.972 21:19:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:19.231 21:19:41 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:19.490 [ 00:23:19.490 { 00:23:19.490 "name": "BaseBdev2", 00:23:19.490 "aliases": [ 00:23:19.490 "08c68de8-d839-4876-bb4b-a93b0b5e8e4d" 00:23:19.490 ], 00:23:19.490 "product_name": "Malloc disk", 00:23:19.490 "block_size": 512, 00:23:19.490 "num_blocks": 65536, 00:23:19.490 "uuid": "08c68de8-d839-4876-bb4b-a93b0b5e8e4d", 00:23:19.490 "assigned_rate_limits": { 00:23:19.490 "rw_ios_per_sec": 0, 00:23:19.490 "rw_mbytes_per_sec": 0, 00:23:19.490 "r_mbytes_per_sec": 0, 00:23:19.490 "w_mbytes_per_sec": 0 00:23:19.490 }, 00:23:19.490 "claimed": true, 00:23:19.490 "claim_type": "exclusive_write", 00:23:19.490 "zoned": false, 00:23:19.490 "supported_io_types": { 00:23:19.490 "read": true, 00:23:19.490 "write": true, 00:23:19.490 "unmap": true, 00:23:19.490 "write_zeroes": true, 00:23:19.490 "flush": true, 00:23:19.490 "reset": true, 00:23:19.490 "compare": false, 00:23:19.490 "compare_and_write": false, 00:23:19.490 "abort": true, 00:23:19.490 "nvme_admin": false, 00:23:19.490 
"nvme_io": false 00:23:19.490 }, 00:23:19.490 "memory_domains": [ 00:23:19.490 { 00:23:19.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.490 "dma_device_type": 2 00:23:19.490 } 00:23:19.490 ], 00:23:19.490 "driver_specific": {} 00:23:19.490 } 00:23:19.490 ] 00:23:19.490 21:19:42 -- common/autotest_common.sh@895 -- # return 0 00:23:19.490 21:19:42 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:19.490 21:19:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:19.490 21:19:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:19.490 21:19:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:19.490 21:19:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:19.490 21:19:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:19.491 21:19:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:19.491 21:19:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:19.491 21:19:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:19.491 21:19:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:19.491 21:19:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:19.491 21:19:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:19.491 21:19:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.491 21:19:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:19.748 21:19:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:19.748 "name": "Existed_Raid", 00:23:19.748 "uuid": "33453dcc-887c-4ae8-865d-9c8be6aa1b0b", 00:23:19.748 "strip_size_kb": 64, 00:23:19.748 "state": "configuring", 00:23:19.748 "raid_level": "raid5f", 00:23:19.748 "superblock": true, 00:23:19.749 "num_base_bdevs": 4, 00:23:19.749 "num_base_bdevs_discovered": 2, 00:23:19.749 "num_base_bdevs_operational": 4, 00:23:19.749 "base_bdevs_list": [ 00:23:19.749 { 00:23:19.749 "name": "BaseBdev1", 00:23:19.749 "uuid": "fecc811b-810d-4189-b146-28477740534f", 00:23:19.749 "is_configured": true, 00:23:19.749 "data_offset": 2048, 00:23:19.749 "data_size": 63488 00:23:19.749 }, 00:23:19.749 { 00:23:19.749 "name": "BaseBdev2", 00:23:19.749 "uuid": "08c68de8-d839-4876-bb4b-a93b0b5e8e4d", 00:23:19.749 "is_configured": true, 00:23:19.749 "data_offset": 2048, 00:23:19.749 "data_size": 63488 00:23:19.749 }, 00:23:19.749 { 00:23:19.749 "name": "BaseBdev3", 00:23:19.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.749 "is_configured": false, 00:23:19.749 "data_offset": 0, 00:23:19.749 "data_size": 0 00:23:19.749 }, 00:23:19.749 { 00:23:19.749 "name": "BaseBdev4", 00:23:19.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.749 "is_configured": false, 00:23:19.749 "data_offset": 0, 00:23:19.749 "data_size": 0 00:23:19.749 } 00:23:19.749 ] 00:23:19.749 }' 00:23:19.749 21:19:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:19.749 21:19:42 -- common/autotest_common.sh@10 -- # set +x 00:23:20.316 21:19:42 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:20.574 [2024-06-07 21:19:43.170500] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:20.574 BaseBdev3 00:23:20.574 21:19:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:20.574 21:19:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:23:20.574 21:19:43 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:20.574 21:19:43 -- common/autotest_common.sh@889 -- # local i 00:23:20.574 21:19:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:20.574 21:19:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:20.574 21:19:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:20.832 21:19:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:21.090 [ 00:23:21.090 { 00:23:21.090 "name": "BaseBdev3", 00:23:21.090 "aliases": [ 00:23:21.090 "65a6895a-21f3-4a42-b02b-21545d39f2ab" 00:23:21.090 ], 00:23:21.090 "product_name": "Malloc disk", 00:23:21.090 "block_size": 512, 00:23:21.090 "num_blocks": 65536, 00:23:21.090 "uuid": "65a6895a-21f3-4a42-b02b-21545d39f2ab", 00:23:21.090 "assigned_rate_limits": { 00:23:21.090 "rw_ios_per_sec": 0, 00:23:21.090 "rw_mbytes_per_sec": 0, 00:23:21.090 "r_mbytes_per_sec": 0, 00:23:21.090 "w_mbytes_per_sec": 0 00:23:21.090 }, 00:23:21.090 "claimed": true, 00:23:21.090 "claim_type": "exclusive_write", 00:23:21.090 "zoned": false, 00:23:21.090 "supported_io_types": { 00:23:21.090 "read": true, 00:23:21.090 "write": true, 00:23:21.090 "unmap": true, 00:23:21.090 "write_zeroes": true, 00:23:21.090 "flush": true, 00:23:21.090 "reset": true, 00:23:21.090 "compare": false, 00:23:21.090 "compare_and_write": false, 00:23:21.090 "abort": true, 00:23:21.090 "nvme_admin": false, 00:23:21.090 "nvme_io": false 00:23:21.090 }, 00:23:21.090 "memory_domains": [ 00:23:21.090 { 00:23:21.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:21.090 "dma_device_type": 2 00:23:21.090 } 00:23:21.090 ], 00:23:21.090 "driver_specific": {} 00:23:21.090 } 00:23:21.090 ] 00:23:21.090 21:19:43 -- common/autotest_common.sh@895 -- # return 0 00:23:21.090 21:19:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:21.090 21:19:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:21.090 21:19:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:21.090 21:19:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:21.090 21:19:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:21.090 21:19:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:21.090 21:19:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:21.090 21:19:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:21.090 21:19:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:21.090 21:19:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:21.090 21:19:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:21.090 21:19:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:21.090 21:19:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.090 21:19:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.348 21:19:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:21.348 "name": "Existed_Raid", 00:23:21.348 "uuid": "33453dcc-887c-4ae8-865d-9c8be6aa1b0b", 00:23:21.348 "strip_size_kb": 64, 00:23:21.348 "state": "configuring", 00:23:21.348 "raid_level": "raid5f", 00:23:21.348 "superblock": true, 00:23:21.348 "num_base_bdevs": 4, 00:23:21.348 "num_base_bdevs_discovered": 3, 00:23:21.348 "num_base_bdevs_operational": 4, 
00:23:21.348 "base_bdevs_list": [ 00:23:21.348 { 00:23:21.348 "name": "BaseBdev1", 00:23:21.348 "uuid": "fecc811b-810d-4189-b146-28477740534f", 00:23:21.348 "is_configured": true, 00:23:21.348 "data_offset": 2048, 00:23:21.348 "data_size": 63488 00:23:21.348 }, 00:23:21.348 { 00:23:21.348 "name": "BaseBdev2", 00:23:21.348 "uuid": "08c68de8-d839-4876-bb4b-a93b0b5e8e4d", 00:23:21.348 "is_configured": true, 00:23:21.348 "data_offset": 2048, 00:23:21.348 "data_size": 63488 00:23:21.348 }, 00:23:21.348 { 00:23:21.348 "name": "BaseBdev3", 00:23:21.348 "uuid": "65a6895a-21f3-4a42-b02b-21545d39f2ab", 00:23:21.348 "is_configured": true, 00:23:21.348 "data_offset": 2048, 00:23:21.348 "data_size": 63488 00:23:21.348 }, 00:23:21.348 { 00:23:21.348 "name": "BaseBdev4", 00:23:21.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.348 "is_configured": false, 00:23:21.348 "data_offset": 0, 00:23:21.348 "data_size": 0 00:23:21.348 } 00:23:21.348 ] 00:23:21.348 }' 00:23:21.348 21:19:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:21.348 21:19:43 -- common/autotest_common.sh@10 -- # set +x 00:23:21.915 21:19:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:21.915 [2024-06-07 21:19:44.587839] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:21.915 BaseBdev4 00:23:21.915 [2024-06-07 21:19:44.588197] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:23:21.915 [2024-06-07 21:19:44.588244] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:21.915 [2024-06-07 21:19:44.588406] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:23:21.915 [2024-06-07 21:19:44.589334] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:23:21.915 [2024-06-07 21:19:44.589372] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:23:21.915 [2024-06-07 21:19:44.589541] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:22.174 21:19:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:23:22.174 21:19:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:23:22.174 21:19:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:22.174 21:19:44 -- common/autotest_common.sh@889 -- # local i 00:23:22.174 21:19:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:22.174 21:19:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:22.174 21:19:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:22.432 21:19:44 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:22.432 [ 00:23:22.432 { 00:23:22.432 "name": "BaseBdev4", 00:23:22.432 "aliases": [ 00:23:22.432 "b8764254-e2ad-47b2-9b6d-ade305a675ee" 00:23:22.432 ], 00:23:22.432 "product_name": "Malloc disk", 00:23:22.432 "block_size": 512, 00:23:22.432 "num_blocks": 65536, 00:23:22.432 "uuid": "b8764254-e2ad-47b2-9b6d-ade305a675ee", 00:23:22.432 "assigned_rate_limits": { 00:23:22.432 "rw_ios_per_sec": 0, 00:23:22.432 "rw_mbytes_per_sec": 0, 00:23:22.432 "r_mbytes_per_sec": 0, 00:23:22.433 "w_mbytes_per_sec": 0 00:23:22.433 }, 00:23:22.433 "claimed": true, 00:23:22.433 "claim_type": 
"exclusive_write", 00:23:22.433 "zoned": false, 00:23:22.433 "supported_io_types": { 00:23:22.433 "read": true, 00:23:22.433 "write": true, 00:23:22.433 "unmap": true, 00:23:22.433 "write_zeroes": true, 00:23:22.433 "flush": true, 00:23:22.433 "reset": true, 00:23:22.433 "compare": false, 00:23:22.433 "compare_and_write": false, 00:23:22.433 "abort": true, 00:23:22.433 "nvme_admin": false, 00:23:22.433 "nvme_io": false 00:23:22.433 }, 00:23:22.433 "memory_domains": [ 00:23:22.433 { 00:23:22.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:22.433 "dma_device_type": 2 00:23:22.433 } 00:23:22.433 ], 00:23:22.433 "driver_specific": {} 00:23:22.433 } 00:23:22.433 ] 00:23:22.433 21:19:45 -- common/autotest_common.sh@895 -- # return 0 00:23:22.433 21:19:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:22.433 21:19:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:22.433 21:19:45 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:22.433 21:19:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:22.433 21:19:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:22.433 21:19:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:22.433 21:19:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:22.433 21:19:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:22.433 21:19:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:22.433 21:19:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:22.433 21:19:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:22.433 21:19:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:22.433 21:19:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.433 21:19:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.691 21:19:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:22.691 "name": "Existed_Raid", 00:23:22.691 "uuid": "33453dcc-887c-4ae8-865d-9c8be6aa1b0b", 00:23:22.691 "strip_size_kb": 64, 00:23:22.691 "state": "online", 00:23:22.691 "raid_level": "raid5f", 00:23:22.691 "superblock": true, 00:23:22.691 "num_base_bdevs": 4, 00:23:22.691 "num_base_bdevs_discovered": 4, 00:23:22.691 "num_base_bdevs_operational": 4, 00:23:22.691 "base_bdevs_list": [ 00:23:22.691 { 00:23:22.691 "name": "BaseBdev1", 00:23:22.691 "uuid": "fecc811b-810d-4189-b146-28477740534f", 00:23:22.691 "is_configured": true, 00:23:22.691 "data_offset": 2048, 00:23:22.691 "data_size": 63488 00:23:22.691 }, 00:23:22.691 { 00:23:22.691 "name": "BaseBdev2", 00:23:22.691 "uuid": "08c68de8-d839-4876-bb4b-a93b0b5e8e4d", 00:23:22.691 "is_configured": true, 00:23:22.691 "data_offset": 2048, 00:23:22.691 "data_size": 63488 00:23:22.691 }, 00:23:22.691 { 00:23:22.691 "name": "BaseBdev3", 00:23:22.691 "uuid": "65a6895a-21f3-4a42-b02b-21545d39f2ab", 00:23:22.691 "is_configured": true, 00:23:22.691 "data_offset": 2048, 00:23:22.691 "data_size": 63488 00:23:22.691 }, 00:23:22.691 { 00:23:22.691 "name": "BaseBdev4", 00:23:22.691 "uuid": "b8764254-e2ad-47b2-9b6d-ade305a675ee", 00:23:22.691 "is_configured": true, 00:23:22.691 "data_offset": 2048, 00:23:22.691 "data_size": 63488 00:23:22.691 } 00:23:22.691 ] 00:23:22.691 }' 00:23:22.691 21:19:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:22.691 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:23:23.625 21:19:46 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:23.625 [2024-06-07 21:19:46.276465] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:23.883 21:19:46 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:23.883 21:19:46 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:23.883 21:19:46 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:23.883 21:19:46 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:23.883 21:19:46 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:23.883 21:19:46 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:23.883 21:19:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:23.883 21:19:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:23.883 21:19:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:23.883 21:19:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:23.883 21:19:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:23.883 21:19:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:23.883 21:19:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:23.883 21:19:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:23.883 21:19:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:23.883 21:19:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.883 21:19:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:24.140 21:19:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:24.140 "name": "Existed_Raid", 00:23:24.140 "uuid": "33453dcc-887c-4ae8-865d-9c8be6aa1b0b", 00:23:24.140 "strip_size_kb": 64, 00:23:24.140 "state": "online", 00:23:24.140 "raid_level": "raid5f", 00:23:24.140 "superblock": true, 00:23:24.140 "num_base_bdevs": 4, 00:23:24.140 "num_base_bdevs_discovered": 3, 00:23:24.140 "num_base_bdevs_operational": 3, 00:23:24.140 "base_bdevs_list": [ 00:23:24.140 { 00:23:24.140 "name": null, 00:23:24.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.140 "is_configured": false, 00:23:24.140 "data_offset": 2048, 00:23:24.140 "data_size": 63488 00:23:24.140 }, 00:23:24.140 { 00:23:24.140 "name": "BaseBdev2", 00:23:24.140 "uuid": "08c68de8-d839-4876-bb4b-a93b0b5e8e4d", 00:23:24.140 "is_configured": true, 00:23:24.140 "data_offset": 2048, 00:23:24.140 "data_size": 63488 00:23:24.140 }, 00:23:24.140 { 00:23:24.140 "name": "BaseBdev3", 00:23:24.140 "uuid": "65a6895a-21f3-4a42-b02b-21545d39f2ab", 00:23:24.140 "is_configured": true, 00:23:24.140 "data_offset": 2048, 00:23:24.140 "data_size": 63488 00:23:24.140 }, 00:23:24.140 { 00:23:24.140 "name": "BaseBdev4", 00:23:24.140 "uuid": "b8764254-e2ad-47b2-9b6d-ade305a675ee", 00:23:24.140 "is_configured": true, 00:23:24.140 "data_offset": 2048, 00:23:24.140 "data_size": 63488 00:23:24.140 } 00:23:24.140 ] 00:23:24.140 }' 00:23:24.140 21:19:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:24.140 21:19:46 -- common/autotest_common.sh@10 -- # set +x 00:23:24.706 21:19:47 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:24.706 21:19:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:24.706 21:19:47 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.706 21:19:47 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:24.964 21:19:47 -- bdev/bdev_raid.sh@274 -- # 
raid_bdev=Existed_Raid 00:23:24.964 21:19:47 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:24.964 21:19:47 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:25.223 [2024-06-07 21:19:47.654974] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:25.223 [2024-06-07 21:19:47.655011] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:25.223 [2024-06-07 21:19:47.655094] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:25.223 21:19:47 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:25.223 21:19:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:25.223 21:19:47 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.223 21:19:47 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:25.223 21:19:47 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:25.223 21:19:47 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:25.223 21:19:47 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:25.481 [2024-06-07 21:19:48.062294] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:25.481 21:19:48 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:25.481 21:19:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:25.481 21:19:48 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.481 21:19:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:25.739 21:19:48 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:25.739 21:19:48 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:25.739 21:19:48 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:25.998 [2024-06-07 21:19:48.512398] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:25.998 [2024-06-07 21:19:48.512450] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:23:25.998 21:19:48 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:25.998 21:19:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:25.998 21:19:48 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.998 21:19:48 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:26.256 21:19:48 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:26.256 21:19:48 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:26.256 21:19:48 -- bdev/bdev_raid.sh@287 -- # killprocess 143847 00:23:26.256 21:19:48 -- common/autotest_common.sh@926 -- # '[' -z 143847 ']' 00:23:26.256 21:19:48 -- common/autotest_common.sh@930 -- # kill -0 143847 00:23:26.256 21:19:48 -- common/autotest_common.sh@931 -- # uname 00:23:26.256 21:19:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:26.256 21:19:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 143847 00:23:26.256 killing process with pid 143847 00:23:26.256 21:19:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:26.256 21:19:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:26.256 21:19:48 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 143847' 00:23:26.256 21:19:48 -- common/autotest_common.sh@945 -- # kill 143847 00:23:26.256 21:19:48 -- common/autotest_common.sh@950 -- # wait 143847 00:23:26.256 [2024-06-07 21:19:48.811994] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:26.256 [2024-06-07 21:19:48.812103] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:26.515 ************************************ 00:23:26.515 END TEST raid5f_state_function_test_sb 00:23:26.515 ************************************ 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:26.515 00:23:26.515 real 0m13.897s 00:23:26.515 user 0m25.991s 00:23:26.515 sys 0m1.681s 00:23:26.515 21:19:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:26.515 21:19:49 -- common/autotest_common.sh@10 -- # set +x 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:23:26.515 21:19:49 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:23:26.515 21:19:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:26.515 21:19:49 -- common/autotest_common.sh@10 -- # set +x 00:23:26.515 ************************************ 00:23:26.515 START TEST raid5f_superblock_test 00:23:26.515 ************************************ 00:23:26.515 21:19:49 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 4 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@357 -- # raid_pid=144308 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@358 -- # waitforlisten 144308 /var/tmp/spdk-raid.sock 00:23:26.515 21:19:49 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:26.515 21:19:49 -- common/autotest_common.sh@819 -- # '[' -z 144308 ']' 00:23:26.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:26.515 21:19:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:26.515 21:19:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:26.515 21:19:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
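Once the listener is up, raid5f_superblock_test builds each base bdev as a malloc disk wrapped in a passthru bdev with a fixed UUID, then assembles the four pt bdevs into raid_bdev1 with a superblock. Condensed into a sketch (rpc.py again abbreviates the full scripts/rpc.py path from this run):

    # one malloc disk plus one passthru wrapper per base bdev
    for i in 1 2 3 4; do
        rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc$i
        rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc$i \
            -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f \
        -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

The passthru layer matters later: the test can delete the pt bdevs with bdev_passthru_delete while the superblocks written to the malloc disks underneath survive.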
00:23:26.515 21:19:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:26.515 21:19:49 -- common/autotest_common.sh@10 -- # set +x 00:23:26.515 [2024-06-07 21:19:49.136633] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:26.516 [2024-06-07 21:19:49.136831] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144308 ] 00:23:26.774 [2024-06-07 21:19:49.289167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.774 [2024-06-07 21:19:49.342369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.774 [2024-06-07 21:19:49.396273] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:27.342 21:19:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:27.342 21:19:50 -- common/autotest_common.sh@852 -- # return 0 00:23:27.342 21:19:50 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:23:27.342 21:19:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:27.342 21:19:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:23:27.342 21:19:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:23:27.342 21:19:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:27.342 21:19:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:27.342 21:19:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:27.342 21:19:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:27.342 21:19:50 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:27.600 malloc1 00:23:27.859 21:19:50 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:27.859 [2024-06-07 21:19:50.465794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:27.859 [2024-06-07 21:19:50.465926] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.859 [2024-06-07 21:19:50.465962] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:23:27.859 [2024-06-07 21:19:50.466008] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.859 [2024-06-07 21:19:50.468592] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.859 [2024-06-07 21:19:50.468652] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:27.859 pt1 00:23:27.859 21:19:50 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:27.859 21:19:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:27.859 21:19:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:23:27.859 21:19:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:23:27.859 21:19:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:27.859 21:19:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:27.859 21:19:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:27.859 21:19:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:27.859 21:19:50 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:28.118 malloc2 00:23:28.118 21:19:50 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:28.375 [2024-06-07 21:19:50.928515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:28.375 [2024-06-07 21:19:50.928627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.375 [2024-06-07 21:19:50.928672] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:23:28.375 [2024-06-07 21:19:50.928729] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.375 [2024-06-07 21:19:50.931187] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.375 [2024-06-07 21:19:50.931255] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:28.375 pt2 00:23:28.375 21:19:50 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:28.375 21:19:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:28.375 21:19:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:23:28.375 21:19:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:23:28.375 21:19:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:28.375 21:19:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:28.375 21:19:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:28.375 21:19:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:28.375 21:19:50 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:28.633 malloc3 00:23:28.633 21:19:51 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:28.893 [2024-06-07 21:19:51.448576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:28.893 [2024-06-07 21:19:51.448695] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.893 [2024-06-07 21:19:51.448739] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:28.893 [2024-06-07 21:19:51.448785] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.893 [2024-06-07 21:19:51.451196] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.893 [2024-06-07 21:19:51.451265] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:28.893 pt3 00:23:28.893 21:19:51 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:28.893 21:19:51 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:28.893 21:19:51 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:23:28.893 21:19:51 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:23:28.893 21:19:51 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:23:28.893 21:19:51 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:28.893 21:19:51 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:28.893 21:19:51 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:28.893 21:19:51 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:23:29.151 malloc4 00:23:29.151 21:19:51 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:29.410 [2024-06-07 21:19:51.859316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:29.410 [2024-06-07 21:19:51.859424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.410 [2024-06-07 21:19:51.859467] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:29.410 [2024-06-07 21:19:51.859504] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.410 [2024-06-07 21:19:51.861679] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.410 [2024-06-07 21:19:51.861743] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:29.410 pt4 00:23:29.410 21:19:51 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:29.410 21:19:51 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:29.410 21:19:51 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:23:29.410 [2024-06-07 21:19:52.055425] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:29.410 [2024-06-07 21:19:52.057240] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:29.410 [2024-06-07 21:19:52.057334] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:29.410 [2024-06-07 21:19:52.057408] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:29.410 [2024-06-07 21:19:52.057654] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:23:29.410 [2024-06-07 21:19:52.057672] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:29.410 [2024-06-07 21:19:52.057814] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:23:29.410 [2024-06-07 21:19:52.058538] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:23:29.410 [2024-06-07 21:19:52.058563] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:23:29.410 [2024-06-07 21:19:52.058718] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:29.410 21:19:52 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:29.410 21:19:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:29.410 21:19:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:29.410 21:19:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:29.410 21:19:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:29.410 21:19:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:29.410 21:19:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:29.410 21:19:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:29.410 21:19:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:29.410 21:19:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:29.410 21:19:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:23:29.410 21:19:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.976 21:19:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:29.976 "name": "raid_bdev1", 00:23:29.976 "uuid": "20fdf2ac-e23d-4646-94b0-5cdb5c73434e", 00:23:29.976 "strip_size_kb": 64, 00:23:29.976 "state": "online", 00:23:29.976 "raid_level": "raid5f", 00:23:29.976 "superblock": true, 00:23:29.976 "num_base_bdevs": 4, 00:23:29.976 "num_base_bdevs_discovered": 4, 00:23:29.976 "num_base_bdevs_operational": 4, 00:23:29.976 "base_bdevs_list": [ 00:23:29.976 { 00:23:29.976 "name": "pt1", 00:23:29.976 "uuid": "5c6697b6-ea1b-5df9-b5c0-6b41923c5e3a", 00:23:29.976 "is_configured": true, 00:23:29.976 "data_offset": 2048, 00:23:29.976 "data_size": 63488 00:23:29.976 }, 00:23:29.976 { 00:23:29.976 "name": "pt2", 00:23:29.976 "uuid": "949a2fe5-848c-584b-b634-e085a16837f0", 00:23:29.976 "is_configured": true, 00:23:29.976 "data_offset": 2048, 00:23:29.976 "data_size": 63488 00:23:29.976 }, 00:23:29.976 { 00:23:29.976 "name": "pt3", 00:23:29.976 "uuid": "3bef073d-2ecd-5dd0-89b5-9e243567d858", 00:23:29.976 "is_configured": true, 00:23:29.976 "data_offset": 2048, 00:23:29.976 "data_size": 63488 00:23:29.976 }, 00:23:29.976 { 00:23:29.976 "name": "pt4", 00:23:29.976 "uuid": "08cb5332-82e7-51b8-9652-06a310c3e398", 00:23:29.976 "is_configured": true, 00:23:29.976 "data_offset": 2048, 00:23:29.976 "data_size": 63488 00:23:29.976 } 00:23:29.976 ] 00:23:29.976 }' 00:23:29.976 21:19:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:29.976 21:19:52 -- common/autotest_common.sh@10 -- # set +x 00:23:30.543 21:19:53 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:30.543 21:19:53 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:23:30.800 [2024-06-07 21:19:53.265371] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:30.800 21:19:53 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=20fdf2ac-e23d-4646-94b0-5cdb5c73434e 00:23:30.800 21:19:53 -- bdev/bdev_raid.sh@380 -- # '[' -z 20fdf2ac-e23d-4646-94b0-5cdb5c73434e ']' 00:23:30.800 21:19:53 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:30.800 [2024-06-07 21:19:53.465238] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:30.800 [2024-06-07 21:19:53.465264] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:30.800 [2024-06-07 21:19:53.465383] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:30.800 [2024-06-07 21:19:53.465477] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:30.800 [2024-06-07 21:19:53.465521] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:23:31.057 21:19:53 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.057 21:19:53 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:23:31.057 21:19:53 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:23:31.057 21:19:53 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:23:31.057 21:19:53 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:31.057 21:19:53 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
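Each verify_raid_bdev_state call in this log is the same two-step probe: dump every raid bdev over JSON-RPC, select the one under test with jq, then compare state, raid_level, strip_size_kb and the base-bdev counts against the expected values. The teardown running here then deletes the array and each passthru member and confirms nothing is left behind. A hand-run equivalent, reusing $rpc and $sock from the sketch above:

  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # expect "online"
  $rpc -s $sock bdev_raid_delete raid_bdev1                     # state flips online -> offline before cleanup
  for pt in pt1 pt2 pt3 pt4; do $rpc -s $sock bdev_passthru_delete $pt; done
  $rpc -s $sock bdev_get_bdevs | jq -r '[.[] | select(.product_name == "passthru")] | any'       # expect false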
00:23:31.324 21:19:53 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:31.324 21:19:53 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:31.582 21:19:54 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:31.582 21:19:54 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:31.839 21:19:54 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:31.839 21:19:54 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:32.097 21:19:54 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:32.097 21:19:54 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:32.354 21:19:54 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:23:32.354 21:19:54 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:32.354 21:19:54 -- common/autotest_common.sh@640 -- # local es=0 00:23:32.354 21:19:54 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:32.354 21:19:54 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:32.354 21:19:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:32.354 21:19:54 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:32.354 21:19:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:32.354 21:19:54 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:32.354 21:19:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:32.354 21:19:54 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:32.354 21:19:54 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:32.354 21:19:54 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:32.612 [2024-06-07 21:19:55.033632] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:32.612 [2024-06-07 21:19:55.035430] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:32.612 [2024-06-07 21:19:55.035520] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:32.612 [2024-06-07 21:19:55.035563] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:23:32.612 [2024-06-07 21:19:55.035643] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:23:32.612 [2024-06-07 21:19:55.035759] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:23:32.612 [2024-06-07 21:19:55.035797] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:23:32.612 
[2024-06-07 21:19:55.035855] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:23:32.612 [2024-06-07 21:19:55.035882] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:32.612 [2024-06-07 21:19:55.035894] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:23:32.612 request: 00:23:32.612 { 00:23:32.612 "name": "raid_bdev1", 00:23:32.612 "raid_level": "raid5f", 00:23:32.612 "base_bdevs": [ 00:23:32.612 "malloc1", 00:23:32.612 "malloc2", 00:23:32.612 "malloc3", 00:23:32.612 "malloc4" 00:23:32.612 ], 00:23:32.612 "superblock": false, 00:23:32.612 "strip_size_kb": 64, 00:23:32.612 "method": "bdev_raid_create", 00:23:32.612 "req_id": 1 00:23:32.612 } 00:23:32.612 Got JSON-RPC error response 00:23:32.612 response: 00:23:32.612 { 00:23:32.612 "code": -17, 00:23:32.612 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:32.612 } 00:23:32.612 21:19:55 -- common/autotest_common.sh@643 -- # es=1 00:23:32.612 21:19:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:32.612 21:19:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:32.612 21:19:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:32.612 21:19:55 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.612 21:19:55 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:23:32.612 21:19:55 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:23:32.612 21:19:55 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:23:32.612 21:19:55 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:32.870 [2024-06-07 21:19:55.465663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:32.870 [2024-06-07 21:19:55.465798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.870 [2024-06-07 21:19:55.465833] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:32.870 [2024-06-07 21:19:55.465862] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.870 [2024-06-07 21:19:55.468191] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.870 [2024-06-07 21:19:55.468274] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:32.870 [2024-06-07 21:19:55.468381] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:32.870 [2024-06-07 21:19:55.468487] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:32.870 pt1 00:23:32.870 21:19:55 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:32.870 21:19:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:32.870 21:19:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:32.870 21:19:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:32.870 21:19:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:32.870 21:19:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:32.870 21:19:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:32.870 21:19:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:32.870 21:19:55 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:23:32.870 21:19:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:32.870 21:19:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.870 21:19:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.128 21:19:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:33.128 "name": "raid_bdev1", 00:23:33.128 "uuid": "20fdf2ac-e23d-4646-94b0-5cdb5c73434e", 00:23:33.128 "strip_size_kb": 64, 00:23:33.128 "state": "configuring", 00:23:33.128 "raid_level": "raid5f", 00:23:33.128 "superblock": true, 00:23:33.128 "num_base_bdevs": 4, 00:23:33.128 "num_base_bdevs_discovered": 1, 00:23:33.128 "num_base_bdevs_operational": 4, 00:23:33.128 "base_bdevs_list": [ 00:23:33.128 { 00:23:33.128 "name": "pt1", 00:23:33.128 "uuid": "5c6697b6-ea1b-5df9-b5c0-6b41923c5e3a", 00:23:33.128 "is_configured": true, 00:23:33.128 "data_offset": 2048, 00:23:33.128 "data_size": 63488 00:23:33.128 }, 00:23:33.128 { 00:23:33.128 "name": null, 00:23:33.128 "uuid": "949a2fe5-848c-584b-b634-e085a16837f0", 00:23:33.128 "is_configured": false, 00:23:33.128 "data_offset": 2048, 00:23:33.128 "data_size": 63488 00:23:33.128 }, 00:23:33.128 { 00:23:33.128 "name": null, 00:23:33.128 "uuid": "3bef073d-2ecd-5dd0-89b5-9e243567d858", 00:23:33.128 "is_configured": false, 00:23:33.128 "data_offset": 2048, 00:23:33.128 "data_size": 63488 00:23:33.128 }, 00:23:33.128 { 00:23:33.128 "name": null, 00:23:33.128 "uuid": "08cb5332-82e7-51b8-9652-06a310c3e398", 00:23:33.128 "is_configured": false, 00:23:33.128 "data_offset": 2048, 00:23:33.128 "data_size": 63488 00:23:33.128 } 00:23:33.128 ] 00:23:33.128 }' 00:23:33.128 21:19:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:33.128 21:19:55 -- common/autotest_common.sh@10 -- # set +x 00:23:34.063 21:19:56 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:23:34.063 21:19:56 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:34.063 [2024-06-07 21:19:56.573994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:34.063 [2024-06-07 21:19:56.574084] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:34.063 [2024-06-07 21:19:56.574144] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:34.063 [2024-06-07 21:19:56.574180] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:34.063 [2024-06-07 21:19:56.574685] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:34.063 [2024-06-07 21:19:56.574755] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:34.063 [2024-06-07 21:19:56.574846] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:34.063 [2024-06-07 21:19:56.574890] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:34.063 pt2 00:23:34.063 21:19:56 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:34.322 [2024-06-07 21:19:56.834063] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:34.322 21:19:56 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:34.322 21:19:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
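Because the superblock was written, no bdev_raid_create is needed on re-assembly: the moment pt1 re-registered above, its superblock was examined and raid_bdev1 reappeared in the "configuring" state with 1 of 4 base bdevs discovered. The pt2 create/delete pair just traced checks the reverse direction, that a member can be pulled back out of a still-configuring array (sketch, same $rpc/$sock as above):

  $rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002   # pt2 is claimed into the configuring array
  $rpc -s $sock bdev_passthru_delete pt2                                                         # removed again; state stays "configuring", discovered back to 1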
00:23:34.322 21:19:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:34.322 21:19:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:34.322 21:19:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:34.322 21:19:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:34.322 21:19:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:34.322 21:19:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:34.322 21:19:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:34.322 21:19:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:34.322 21:19:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.322 21:19:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.580 21:19:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:34.580 "name": "raid_bdev1", 00:23:34.580 "uuid": "20fdf2ac-e23d-4646-94b0-5cdb5c73434e", 00:23:34.580 "strip_size_kb": 64, 00:23:34.580 "state": "configuring", 00:23:34.580 "raid_level": "raid5f", 00:23:34.580 "superblock": true, 00:23:34.580 "num_base_bdevs": 4, 00:23:34.580 "num_base_bdevs_discovered": 1, 00:23:34.580 "num_base_bdevs_operational": 4, 00:23:34.580 "base_bdevs_list": [ 00:23:34.580 { 00:23:34.580 "name": "pt1", 00:23:34.580 "uuid": "5c6697b6-ea1b-5df9-b5c0-6b41923c5e3a", 00:23:34.580 "is_configured": true, 00:23:34.580 "data_offset": 2048, 00:23:34.580 "data_size": 63488 00:23:34.580 }, 00:23:34.580 { 00:23:34.580 "name": null, 00:23:34.580 "uuid": "949a2fe5-848c-584b-b634-e085a16837f0", 00:23:34.580 "is_configured": false, 00:23:34.580 "data_offset": 2048, 00:23:34.580 "data_size": 63488 00:23:34.580 }, 00:23:34.580 { 00:23:34.580 "name": null, 00:23:34.580 "uuid": "3bef073d-2ecd-5dd0-89b5-9e243567d858", 00:23:34.580 "is_configured": false, 00:23:34.580 "data_offset": 2048, 00:23:34.580 "data_size": 63488 00:23:34.580 }, 00:23:34.580 { 00:23:34.580 "name": null, 00:23:34.580 "uuid": "08cb5332-82e7-51b8-9652-06a310c3e398", 00:23:34.580 "is_configured": false, 00:23:34.580 "data_offset": 2048, 00:23:34.580 "data_size": 63488 00:23:34.580 } 00:23:34.580 ] 00:23:34.580 }' 00:23:34.580 21:19:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:34.580 21:19:57 -- common/autotest_common.sh@10 -- # set +x 00:23:35.146 21:19:57 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:23:35.146 21:19:57 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:35.146 21:19:57 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:35.405 [2024-06-07 21:19:58.010351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:35.405 [2024-06-07 21:19:58.010445] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.405 [2024-06-07 21:19:58.010485] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:35.405 [2024-06-07 21:19:58.010506] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.405 [2024-06-07 21:19:58.010918] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.405 [2024-06-07 21:19:58.010964] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:35.405 [2024-06-07 21:19:58.011041] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:23:35.405 [2024-06-07 21:19:58.011067] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:35.405 pt2 00:23:35.405 21:19:58 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:35.405 21:19:58 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:35.405 21:19:58 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:35.663 [2024-06-07 21:19:58.262418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:35.663 [2024-06-07 21:19:58.262514] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.663 [2024-06-07 21:19:58.262547] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:35.663 [2024-06-07 21:19:58.262573] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.663 [2024-06-07 21:19:58.263008] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.663 [2024-06-07 21:19:58.263061] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:35.663 [2024-06-07 21:19:58.263135] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:35.663 [2024-06-07 21:19:58.263161] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:35.663 pt3 00:23:35.663 21:19:58 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:35.663 21:19:58 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:35.663 21:19:58 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:35.922 [2024-06-07 21:19:58.462439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:35.922 [2024-06-07 21:19:58.462530] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.922 [2024-06-07 21:19:58.462565] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:35.922 [2024-06-07 21:19:58.462591] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.922 [2024-06-07 21:19:58.462987] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.922 [2024-06-07 21:19:58.463032] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:35.922 [2024-06-07 21:19:58.463106] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:23:35.922 [2024-06-07 21:19:58.463132] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:35.922 [2024-06-07 21:19:58.463279] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:23:35.922 [2024-06-07 21:19:58.463292] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:35.922 [2024-06-07 21:19:58.463368] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:35.922 [2024-06-07 21:19:58.464209] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:23:35.922 [2024-06-07 21:19:58.464234] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:23:35.922 [2024-06-07 21:19:58.464384] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:23:35.922 pt4 00:23:35.922 21:19:58 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:35.922 21:19:58 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:35.922 21:19:58 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:35.922 21:19:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:35.922 21:19:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:35.922 21:19:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:35.922 21:19:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:35.922 21:19:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:35.922 21:19:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:35.922 21:19:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:35.922 21:19:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:35.922 21:19:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:35.922 21:19:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.922 21:19:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.181 21:19:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:36.181 "name": "raid_bdev1", 00:23:36.181 "uuid": "20fdf2ac-e23d-4646-94b0-5cdb5c73434e", 00:23:36.181 "strip_size_kb": 64, 00:23:36.181 "state": "online", 00:23:36.181 "raid_level": "raid5f", 00:23:36.181 "superblock": true, 00:23:36.181 "num_base_bdevs": 4, 00:23:36.181 "num_base_bdevs_discovered": 4, 00:23:36.181 "num_base_bdevs_operational": 4, 00:23:36.181 "base_bdevs_list": [ 00:23:36.181 { 00:23:36.181 "name": "pt1", 00:23:36.181 "uuid": "5c6697b6-ea1b-5df9-b5c0-6b41923c5e3a", 00:23:36.181 "is_configured": true, 00:23:36.181 "data_offset": 2048, 00:23:36.181 "data_size": 63488 00:23:36.181 }, 00:23:36.181 { 00:23:36.181 "name": "pt2", 00:23:36.181 "uuid": "949a2fe5-848c-584b-b634-e085a16837f0", 00:23:36.181 "is_configured": true, 00:23:36.181 "data_offset": 2048, 00:23:36.181 "data_size": 63488 00:23:36.181 }, 00:23:36.181 { 00:23:36.181 "name": "pt3", 00:23:36.181 "uuid": "3bef073d-2ecd-5dd0-89b5-9e243567d858", 00:23:36.181 "is_configured": true, 00:23:36.181 "data_offset": 2048, 00:23:36.181 "data_size": 63488 00:23:36.181 }, 00:23:36.181 { 00:23:36.181 "name": "pt4", 00:23:36.181 "uuid": "08cb5332-82e7-51b8-9652-06a310c3e398", 00:23:36.181 "is_configured": true, 00:23:36.181 "data_offset": 2048, 00:23:36.181 "data_size": 63488 00:23:36.181 } 00:23:36.181 ] 00:23:36.181 }' 00:23:36.181 21:19:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:36.181 21:19:58 -- common/autotest_common.sh@10 -- # set +x 00:23:36.747 21:19:59 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:36.747 21:19:59 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:23:37.005 [2024-06-07 21:19:59.618842] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:37.005 21:19:59 -- bdev/bdev_raid.sh@430 -- # '[' 20fdf2ac-e23d-4646-94b0-5cdb5c73434e '!=' 20fdf2ac-e23d-4646-94b0-5cdb5c73434e ']' 00:23:37.005 21:19:59 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:23:37.005 21:19:59 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:37.005 21:19:59 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:37.005 21:19:59 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:23:37.264 [2024-06-07 21:19:59.878802] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:37.264 21:19:59 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:37.264 21:19:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:37.264 21:19:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:37.264 21:19:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:37.264 21:19:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:37.264 21:19:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:37.264 21:19:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:37.264 21:19:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:37.264 21:19:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:37.264 21:19:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:37.264 21:19:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.264 21:19:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.522 21:20:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:37.522 "name": "raid_bdev1", 00:23:37.522 "uuid": "20fdf2ac-e23d-4646-94b0-5cdb5c73434e", 00:23:37.522 "strip_size_kb": 64, 00:23:37.522 "state": "online", 00:23:37.522 "raid_level": "raid5f", 00:23:37.522 "superblock": true, 00:23:37.522 "num_base_bdevs": 4, 00:23:37.522 "num_base_bdevs_discovered": 3, 00:23:37.522 "num_base_bdevs_operational": 3, 00:23:37.522 "base_bdevs_list": [ 00:23:37.522 { 00:23:37.522 "name": null, 00:23:37.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.522 "is_configured": false, 00:23:37.522 "data_offset": 2048, 00:23:37.522 "data_size": 63488 00:23:37.522 }, 00:23:37.522 { 00:23:37.522 "name": "pt2", 00:23:37.522 "uuid": "949a2fe5-848c-584b-b634-e085a16837f0", 00:23:37.522 "is_configured": true, 00:23:37.522 "data_offset": 2048, 00:23:37.522 "data_size": 63488 00:23:37.522 }, 00:23:37.522 { 00:23:37.522 "name": "pt3", 00:23:37.522 "uuid": "3bef073d-2ecd-5dd0-89b5-9e243567d858", 00:23:37.522 "is_configured": true, 00:23:37.522 "data_offset": 2048, 00:23:37.522 "data_size": 63488 00:23:37.522 }, 00:23:37.522 { 00:23:37.522 "name": "pt4", 00:23:37.522 "uuid": "08cb5332-82e7-51b8-9652-06a310c3e398", 00:23:37.522 "is_configured": true, 00:23:37.522 "data_offset": 2048, 00:23:37.522 "data_size": 63488 00:23:37.522 } 00:23:37.522 ] 00:23:37.522 }' 00:23:37.522 21:20:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:37.522 21:20:00 -- common/autotest_common.sh@10 -- # set +x 00:23:38.089 21:20:00 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:38.348 [2024-06-07 21:20:00.943002] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:38.348 [2024-06-07 21:20:00.943042] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:38.348 [2024-06-07 21:20:00.943124] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:38.348 [2024-06-07 21:20:00.943202] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:38.348 [2024-06-07 21:20:00.943214] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:23:38.348 21:20:00 -- bdev/bdev_raid.sh@443 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.348 21:20:00 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:23:38.606 21:20:01 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:23:38.606 21:20:01 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:23:38.607 21:20:01 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:23:38.607 21:20:01 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:38.607 21:20:01 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:38.864 21:20:01 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:38.864 21:20:01 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:38.864 21:20:01 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:39.122 21:20:01 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:39.122 21:20:01 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:39.122 21:20:01 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:39.379 21:20:01 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:39.379 21:20:01 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:39.379 21:20:01 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:23:39.379 21:20:01 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:39.379 21:20:01 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:39.379 [2024-06-07 21:20:01.999131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:39.379 [2024-06-07 21:20:01.999223] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.379 [2024-06-07 21:20:01.999255] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:23:39.379 [2024-06-07 21:20:01.999281] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.379 [2024-06-07 21:20:02.001523] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.379 [2024-06-07 21:20:02.001602] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:39.379 [2024-06-07 21:20:02.001775] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:39.379 [2024-06-07 21:20:02.001826] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:39.379 pt2 00:23:39.379 21:20:02 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:39.379 21:20:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:39.379 21:20:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:39.379 21:20:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:39.379 21:20:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:39.379 21:20:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:39.379 21:20:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:39.379 21:20:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:39.379 21:20:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:39.379 21:20:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:39.379 21:20:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.379 21:20:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.637 21:20:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:39.637 "name": "raid_bdev1", 00:23:39.637 "uuid": "20fdf2ac-e23d-4646-94b0-5cdb5c73434e", 00:23:39.637 "strip_size_kb": 64, 00:23:39.637 "state": "configuring", 00:23:39.637 "raid_level": "raid5f", 00:23:39.637 "superblock": true, 00:23:39.637 "num_base_bdevs": 4, 00:23:39.637 "num_base_bdevs_discovered": 1, 00:23:39.637 "num_base_bdevs_operational": 3, 00:23:39.637 "base_bdevs_list": [ 00:23:39.637 { 00:23:39.637 "name": null, 00:23:39.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.637 "is_configured": false, 00:23:39.637 "data_offset": 2048, 00:23:39.637 "data_size": 63488 00:23:39.637 }, 00:23:39.637 { 00:23:39.637 "name": "pt2", 00:23:39.637 "uuid": "949a2fe5-848c-584b-b634-e085a16837f0", 00:23:39.637 "is_configured": true, 00:23:39.637 "data_offset": 2048, 00:23:39.637 "data_size": 63488 00:23:39.637 }, 00:23:39.637 { 00:23:39.637 "name": null, 00:23:39.637 "uuid": "3bef073d-2ecd-5dd0-89b5-9e243567d858", 00:23:39.637 "is_configured": false, 00:23:39.637 "data_offset": 2048, 00:23:39.637 "data_size": 63488 00:23:39.637 }, 00:23:39.637 { 00:23:39.637 "name": null, 00:23:39.637 "uuid": "08cb5332-82e7-51b8-9652-06a310c3e398", 00:23:39.637 "is_configured": false, 00:23:39.637 "data_offset": 2048, 00:23:39.637 "data_size": 63488 00:23:39.637 } 00:23:39.637 ] 00:23:39.637 }' 00:23:39.637 21:20:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:39.637 21:20:02 -- common/autotest_common.sh@10 -- # set +x 00:23:40.202 21:20:02 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:23:40.202 21:20:02 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:40.202 21:20:02 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:40.463 [2024-06-07 21:20:03.007441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:40.463 [2024-06-07 21:20:03.007535] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:40.463 [2024-06-07 21:20:03.007575] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:23:40.463 [2024-06-07 21:20:03.007604] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:40.463 [2024-06-07 21:20:03.008133] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:40.463 [2024-06-07 21:20:03.008176] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:40.463 [2024-06-07 21:20:03.008258] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:40.463 [2024-06-07 21:20:03.008287] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:40.463 pt3 00:23:40.463 21:20:03 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:40.463 21:20:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:40.463 21:20:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:40.463 21:20:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:40.463 21:20:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:40.463 21:20:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:40.463 21:20:03 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:40.463 21:20:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:40.463 21:20:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:40.463 21:20:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:40.463 21:20:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.463 21:20:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.722 21:20:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:40.722 "name": "raid_bdev1", 00:23:40.722 "uuid": "20fdf2ac-e23d-4646-94b0-5cdb5c73434e", 00:23:40.722 "strip_size_kb": 64, 00:23:40.722 "state": "configuring", 00:23:40.722 "raid_level": "raid5f", 00:23:40.722 "superblock": true, 00:23:40.722 "num_base_bdevs": 4, 00:23:40.722 "num_base_bdevs_discovered": 2, 00:23:40.722 "num_base_bdevs_operational": 3, 00:23:40.722 "base_bdevs_list": [ 00:23:40.722 { 00:23:40.722 "name": null, 00:23:40.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.722 "is_configured": false, 00:23:40.722 "data_offset": 2048, 00:23:40.722 "data_size": 63488 00:23:40.722 }, 00:23:40.722 { 00:23:40.722 "name": "pt2", 00:23:40.722 "uuid": "949a2fe5-848c-584b-b634-e085a16837f0", 00:23:40.722 "is_configured": true, 00:23:40.722 "data_offset": 2048, 00:23:40.722 "data_size": 63488 00:23:40.722 }, 00:23:40.722 { 00:23:40.722 "name": "pt3", 00:23:40.722 "uuid": "3bef073d-2ecd-5dd0-89b5-9e243567d858", 00:23:40.722 "is_configured": true, 00:23:40.723 "data_offset": 2048, 00:23:40.723 "data_size": 63488 00:23:40.723 }, 00:23:40.723 { 00:23:40.723 "name": null, 00:23:40.723 "uuid": "08cb5332-82e7-51b8-9652-06a310c3e398", 00:23:40.723 "is_configured": false, 00:23:40.723 "data_offset": 2048, 00:23:40.723 "data_size": 63488 00:23:40.723 } 00:23:40.723 ] 00:23:40.723 }' 00:23:40.723 21:20:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:40.723 21:20:03 -- common/autotest_common.sh@10 -- # set +x 00:23:41.287 21:20:03 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:23:41.287 21:20:03 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:41.287 21:20:03 -- bdev/bdev_raid.sh@462 -- # i=3 00:23:41.287 21:20:03 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:41.545 [2024-06-07 21:20:04.163694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:41.545 [2024-06-07 21:20:04.163823] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:41.545 [2024-06-07 21:20:04.163872] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:23:41.545 [2024-06-07 21:20:04.163895] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:41.545 [2024-06-07 21:20:04.164450] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:41.545 [2024-06-07 21:20:04.164535] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:41.545 [2024-06-07 21:20:04.164621] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:23:41.545 [2024-06-07 21:20:04.164684] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:41.545 [2024-06-07 21:20:04.164857] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:23:41.545 
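The superblock also remembers the degraded geometry: pt1 was dropped from the online array earlier, so after the raid bdev and every passthru were torn down, recreating just pt2, pt3 and pt4 is enough. Each arrival is claimed against the stored layout, and registering the last member (pt4, traced above) kicks off the automatic transition from "configuring" to "online" with 3 of 4 slots populated and 3 operational, no bdev_raid_create involved (sketch):

  for i in 2 3 4; do
    $rpc -s $sock bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  # after pt4 registers, raid_bdev1 configures itself and comes up online, degraded by one member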
[2024-06-07 21:20:04.164870] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:41.545 [2024-06-07 21:20:04.165001] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:41.545 [2024-06-07 21:20:04.165856] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:23:41.545 [2024-06-07 21:20:04.165880] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:23:41.545 [2024-06-07 21:20:04.166139] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:41.545 pt4 00:23:41.545 21:20:04 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:41.545 21:20:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:41.545 21:20:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:41.545 21:20:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:41.545 21:20:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:41.545 21:20:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:41.545 21:20:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:41.545 21:20:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:41.545 21:20:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:41.545 21:20:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:41.545 21:20:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.545 21:20:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.803 21:20:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:41.803 "name": "raid_bdev1", 00:23:41.803 "uuid": "20fdf2ac-e23d-4646-94b0-5cdb5c73434e", 00:23:41.803 "strip_size_kb": 64, 00:23:41.803 "state": "online", 00:23:41.803 "raid_level": "raid5f", 00:23:41.803 "superblock": true, 00:23:41.803 "num_base_bdevs": 4, 00:23:41.803 "num_base_bdevs_discovered": 3, 00:23:41.803 "num_base_bdevs_operational": 3, 00:23:41.803 "base_bdevs_list": [ 00:23:41.803 { 00:23:41.803 "name": null, 00:23:41.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.803 "is_configured": false, 00:23:41.803 "data_offset": 2048, 00:23:41.803 "data_size": 63488 00:23:41.803 }, 00:23:41.803 { 00:23:41.803 "name": "pt2", 00:23:41.803 "uuid": "949a2fe5-848c-584b-b634-e085a16837f0", 00:23:41.803 "is_configured": true, 00:23:41.803 "data_offset": 2048, 00:23:41.803 "data_size": 63488 00:23:41.803 }, 00:23:41.803 { 00:23:41.803 "name": "pt3", 00:23:41.803 "uuid": "3bef073d-2ecd-5dd0-89b5-9e243567d858", 00:23:41.803 "is_configured": true, 00:23:41.803 "data_offset": 2048, 00:23:41.803 "data_size": 63488 00:23:41.803 }, 00:23:41.803 { 00:23:41.803 "name": "pt4", 00:23:41.803 "uuid": "08cb5332-82e7-51b8-9652-06a310c3e398", 00:23:41.803 "is_configured": true, 00:23:41.803 "data_offset": 2048, 00:23:41.803 "data_size": 63488 00:23:41.803 } 00:23:41.803 ] 00:23:41.803 }' 00:23:41.803 21:20:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:41.803 21:20:04 -- common/autotest_common.sh@10 -- # set +x 00:23:42.736 21:20:05 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:23:42.736 21:20:05 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:42.736 [2024-06-07 21:20:05.369507] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:42.736 
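Note how the missing member shows up in the verification JSON above: the emptied slot keeps its position in base_bdevs_list but reports a null name and an all-zero UUID, while the array itself stays "online" since raid5f tolerates a single absent base bdev. One way to spot such holes by hand (sketch):

  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .base_bdevs_list[] | select(.name == null) | .uuid'
  # prints 00000000-0000-0000-0000-000000000000 for the slot pt1 used to occupy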
[2024-06-07 21:20:05.369542] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:42.737 [2024-06-07 21:20:05.369620] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:42.737 [2024-06-07 21:20:05.369701] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:42.737 [2024-06-07 21:20:05.369712] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:23:42.737 21:20:05 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.737 21:20:05 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:23:42.994 21:20:05 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:23:42.994 21:20:05 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:23:42.994 21:20:05 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:43.253 [2024-06-07 21:20:05.817191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:43.253 [2024-06-07 21:20:05.817283] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.253 [2024-06-07 21:20:05.817326] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:23:43.253 [2024-06-07 21:20:05.817349] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.253 [2024-06-07 21:20:05.819684] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.253 [2024-06-07 21:20:05.819766] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:43.253 [2024-06-07 21:20:05.819849] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:43.253 [2024-06-07 21:20:05.819887] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:43.253 pt1 00:23:43.253 21:20:05 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:43.253 21:20:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:43.253 21:20:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:43.253 21:20:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:43.253 21:20:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:43.253 21:20:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:43.253 21:20:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:43.253 21:20:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:43.253 21:20:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:43.253 21:20:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:43.253 21:20:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.253 21:20:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.511 21:20:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:43.511 "name": "raid_bdev1", 00:23:43.511 "uuid": "20fdf2ac-e23d-4646-94b0-5cdb5c73434e", 00:23:43.511 "strip_size_kb": 64, 00:23:43.511 "state": "configuring", 00:23:43.511 "raid_level": "raid5f", 00:23:43.511 "superblock": true, 00:23:43.511 "num_base_bdevs": 4, 00:23:43.511 "num_base_bdevs_discovered": 1, 00:23:43.511 
"num_base_bdevs_operational": 4, 00:23:43.511 "base_bdevs_list": [ 00:23:43.511 { 00:23:43.511 "name": "pt1", 00:23:43.511 "uuid": "5c6697b6-ea1b-5df9-b5c0-6b41923c5e3a", 00:23:43.511 "is_configured": true, 00:23:43.511 "data_offset": 2048, 00:23:43.511 "data_size": 63488 00:23:43.511 }, 00:23:43.511 { 00:23:43.511 "name": null, 00:23:43.511 "uuid": "949a2fe5-848c-584b-b634-e085a16837f0", 00:23:43.511 "is_configured": false, 00:23:43.511 "data_offset": 2048, 00:23:43.511 "data_size": 63488 00:23:43.511 }, 00:23:43.511 { 00:23:43.511 "name": null, 00:23:43.511 "uuid": "3bef073d-2ecd-5dd0-89b5-9e243567d858", 00:23:43.511 "is_configured": false, 00:23:43.511 "data_offset": 2048, 00:23:43.511 "data_size": 63488 00:23:43.511 }, 00:23:43.511 { 00:23:43.511 "name": null, 00:23:43.511 "uuid": "08cb5332-82e7-51b8-9652-06a310c3e398", 00:23:43.511 "is_configured": false, 00:23:43.511 "data_offset": 2048, 00:23:43.511 "data_size": 63488 00:23:43.511 } 00:23:43.511 ] 00:23:43.511 }' 00:23:43.511 21:20:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:43.511 21:20:06 -- common/autotest_common.sh@10 -- # set +x 00:23:44.078 21:20:06 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:23:44.078 21:20:06 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:44.078 21:20:06 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:44.336 21:20:06 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:44.336 21:20:06 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:44.336 21:20:06 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:44.594 21:20:07 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:44.594 21:20:07 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:44.594 21:20:07 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:44.852 21:20:07 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:44.852 21:20:07 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:44.852 21:20:07 -- bdev/bdev_raid.sh@489 -- # i=3 00:23:44.852 21:20:07 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:45.110 [2024-06-07 21:20:07.557269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:45.110 [2024-06-07 21:20:07.557361] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:45.110 [2024-06-07 21:20:07.557394] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:23:45.110 [2024-06-07 21:20:07.557424] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:45.110 [2024-06-07 21:20:07.557906] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:45.110 [2024-06-07 21:20:07.557968] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:45.110 [2024-06-07 21:20:07.558067] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:23:45.110 [2024-06-07 21:20:07.558083] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:45.110 [2024-06-07 21:20:07.558105] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:45.110 [2024-06-07 
21:20:07.558140] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 00:23:45.110 [2024-06-07 21:20:07.558198] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:45.110 pt4 00:23:45.110 21:20:07 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:45.110 21:20:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:45.110 21:20:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:45.110 21:20:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:45.110 21:20:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:45.110 21:20:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:45.110 21:20:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:45.110 21:20:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:45.110 21:20:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:45.110 21:20:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:45.110 21:20:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.110 21:20:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.110 21:20:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:45.110 "name": "raid_bdev1", 00:23:45.110 "uuid": "20fdf2ac-e23d-4646-94b0-5cdb5c73434e", 00:23:45.110 "strip_size_kb": 64, 00:23:45.110 "state": "configuring", 00:23:45.110 "raid_level": "raid5f", 00:23:45.110 "superblock": true, 00:23:45.110 "num_base_bdevs": 4, 00:23:45.110 "num_base_bdevs_discovered": 1, 00:23:45.110 "num_base_bdevs_operational": 3, 00:23:45.110 "base_bdevs_list": [ 00:23:45.110 { 00:23:45.110 "name": null, 00:23:45.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.110 "is_configured": false, 00:23:45.110 "data_offset": 2048, 00:23:45.110 "data_size": 63488 00:23:45.110 }, 00:23:45.110 { 00:23:45.110 "name": null, 00:23:45.110 "uuid": "949a2fe5-848c-584b-b634-e085a16837f0", 00:23:45.110 "is_configured": false, 00:23:45.110 "data_offset": 2048, 00:23:45.110 "data_size": 63488 00:23:45.110 }, 00:23:45.110 { 00:23:45.110 "name": null, 00:23:45.110 "uuid": "3bef073d-2ecd-5dd0-89b5-9e243567d858", 00:23:45.110 "is_configured": false, 00:23:45.110 "data_offset": 2048, 00:23:45.110 "data_size": 63488 00:23:45.110 }, 00:23:45.110 { 00:23:45.110 "name": "pt4", 00:23:45.110 "uuid": "08cb5332-82e7-51b8-9652-06a310c3e398", 00:23:45.110 "is_configured": true, 00:23:45.110 "data_offset": 2048, 00:23:45.110 "data_size": 63488 00:23:45.110 } 00:23:45.110 ] 00:23:45.110 }' 00:23:45.110 21:20:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:45.110 21:20:07 -- common/autotest_common.sh@10 -- # set +x 00:23:46.044 21:20:08 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:23:46.044 21:20:08 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:46.044 21:20:08 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:46.044 [2024-06-07 21:20:08.629924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:46.044 [2024-06-07 21:20:08.630047] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:46.044 [2024-06-07 21:20:08.630088] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 
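This last scenario exercises superblock sequence numbers. Recreating pt1 first left raid_bdev1 configuring from pt1's stale superblock, whose seq_number of 2 is an older generation than the 4 recorded on pt4 after the array ran degraded. So when pt4 registered, the raid module logged 'seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2)', discarded the stale configuring instance and rebuilt raid_bdev1 around pt4: only pt4 configured, 3 members operational, and pt1 no longer part of the layout. Re-adding pt2 and pt3, underway here, completes the newer 3-member array. A quick probe of that intermediate state (sketch):

  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
  # -> configuring 1/3 right after pt4 wins the superblock comparison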
00:23:46.044 [2024-06-07 21:20:08.630113] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:46.044 [2024-06-07 21:20:08.630640] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:46.044 [2024-06-07 21:20:08.630717] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:46.044 [2024-06-07 21:20:08.630827] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:46.044 [2024-06-07 21:20:08.630858] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:46.044 pt2 00:23:46.044 21:20:08 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:23:46.044 21:20:08 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:46.044 21:20:08 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:46.304 [2024-06-07 21:20:08.821995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:46.304 [2024-06-07 21:20:08.822111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:46.304 [2024-06-07 21:20:08.822144] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d880 00:23:46.304 [2024-06-07 21:20:08.822168] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:46.304 [2024-06-07 21:20:08.822610] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:46.304 [2024-06-07 21:20:08.822670] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:46.304 [2024-06-07 21:20:08.822816] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:46.304 [2024-06-07 21:20:08.822844] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:46.304 [2024-06-07 21:20:08.822967] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:23:46.304 [2024-06-07 21:20:08.822998] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:46.304 [2024-06-07 21:20:08.823081] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:23:46.304 [2024-06-07 21:20:08.824118] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:23:46.304 [2024-06-07 21:20:08.824155] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:23:46.304 [2024-06-07 21:20:08.824328] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:46.304 pt3 00:23:46.304 21:20:08 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:23:46.304 21:20:08 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:46.304 21:20:08 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:46.304 21:20:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:46.304 21:20:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:46.304 21:20:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:46.304 21:20:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:46.304 21:20:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:46.304 21:20:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:46.304 21:20:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:46.304 21:20:08 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:46.304 21:20:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:46.304 21:20:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.304 21:20:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.562 21:20:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:46.562 "name": "raid_bdev1", 00:23:46.562 "uuid": "20fdf2ac-e23d-4646-94b0-5cdb5c73434e", 00:23:46.562 "strip_size_kb": 64, 00:23:46.562 "state": "online", 00:23:46.562 "raid_level": "raid5f", 00:23:46.562 "superblock": true, 00:23:46.562 "num_base_bdevs": 4, 00:23:46.562 "num_base_bdevs_discovered": 3, 00:23:46.562 "num_base_bdevs_operational": 3, 00:23:46.562 "base_bdevs_list": [ 00:23:46.562 { 00:23:46.562 "name": null, 00:23:46.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.562 "is_configured": false, 00:23:46.562 "data_offset": 2048, 00:23:46.562 "data_size": 63488 00:23:46.562 }, 00:23:46.562 { 00:23:46.562 "name": "pt2", 00:23:46.562 "uuid": "949a2fe5-848c-584b-b634-e085a16837f0", 00:23:46.562 "is_configured": true, 00:23:46.562 "data_offset": 2048, 00:23:46.562 "data_size": 63488 00:23:46.562 }, 00:23:46.562 { 00:23:46.562 "name": "pt3", 00:23:46.562 "uuid": "3bef073d-2ecd-5dd0-89b5-9e243567d858", 00:23:46.562 "is_configured": true, 00:23:46.562 "data_offset": 2048, 00:23:46.562 "data_size": 63488 00:23:46.562 }, 00:23:46.562 { 00:23:46.562 "name": "pt4", 00:23:46.562 "uuid": "08cb5332-82e7-51b8-9652-06a310c3e398", 00:23:46.562 "is_configured": true, 00:23:46.562 "data_offset": 2048, 00:23:46.562 "data_size": 63488 00:23:46.562 } 00:23:46.562 ] 00:23:46.562 }' 00:23:46.562 21:20:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:46.562 21:20:09 -- common/autotest_common.sh@10 -- # set +x 00:23:47.128 21:20:09 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:47.128 21:20:09 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:23:47.388 [2024-06-07 21:20:09.894846] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:47.388 21:20:09 -- bdev/bdev_raid.sh@506 -- # '[' 20fdf2ac-e23d-4646-94b0-5cdb5c73434e '!=' 20fdf2ac-e23d-4646-94b0-5cdb5c73434e ']' 00:23:47.388 21:20:09 -- bdev/bdev_raid.sh@511 -- # killprocess 144308 00:23:47.388 21:20:09 -- common/autotest_common.sh@926 -- # '[' -z 144308 ']' 00:23:47.388 21:20:09 -- common/autotest_common.sh@930 -- # kill -0 144308 00:23:47.388 21:20:09 -- common/autotest_common.sh@931 -- # uname 00:23:47.388 21:20:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:47.388 21:20:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 144308 00:23:47.388 killing process with pid 144308 00:23:47.388 21:20:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:47.388 21:20:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:47.388 21:20:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 144308' 00:23:47.388 21:20:09 -- common/autotest_common.sh@945 -- # kill 144308 00:23:47.388 21:20:09 -- common/autotest_common.sh@950 -- # wait 144308 00:23:47.388 [2024-06-07 21:20:09.927744] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:47.388 [2024-06-07 21:20:09.927828] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:47.388 [2024-06-07 21:20:09.927915] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:47.388 [2024-06-07 21:20:09.927927] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:23:47.388 [2024-06-07 21:20:09.967024] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:47.672 ************************************ 00:23:47.672 END TEST raid5f_superblock_test 00:23:47.672 ************************************ 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@513 -- # return 0 00:23:47.672 00:23:47.672 real 0m21.104s 00:23:47.672 user 0m40.200s 00:23:47.672 sys 0m2.328s 00:23:47.672 21:20:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:47.672 21:20:10 -- common/autotest_common.sh@10 -- # set +x 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:23:47.672 21:20:10 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:47.672 21:20:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:47.672 21:20:10 -- common/autotest_common.sh@10 -- # set +x 00:23:47.672 ************************************ 00:23:47.672 START TEST raid5f_rebuild_test 00:23:47.672 ************************************ 00:23:47.672 21:20:10 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 false false 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:47.672 21:20:10 -- 
bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@544 -- # raid_pid=145011 00:23:47.672 21:20:10 -- bdev/bdev_raid.sh@545 -- # waitforlisten 145011 /var/tmp/spdk-raid.sock 00:23:47.672 21:20:10 -- common/autotest_common.sh@819 -- # '[' -z 145011 ']' 00:23:47.673 21:20:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:47.673 21:20:10 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:47.673 21:20:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:47.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:47.673 21:20:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:47.673 21:20:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:47.673 21:20:10 -- common/autotest_common.sh@10 -- # set +x 00:23:47.673 [2024-06-07 21:20:10.323124] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:47.673 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:47.673 Zero copy mechanism will not be used. 00:23:47.673 [2024-06-07 21:20:10.323466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145011 ] 00:23:47.946 [2024-06-07 21:20:10.512734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.946 [2024-06-07 21:20:10.589490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.205 [2024-06-07 21:20:10.648636] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:48.773 21:20:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:48.773 21:20:11 -- common/autotest_common.sh@852 -- # return 0 00:23:48.773 21:20:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:48.773 21:20:11 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:48.773 21:20:11 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:48.773 BaseBdev1 00:23:48.773 21:20:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:48.773 21:20:11 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:48.773 21:20:11 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:49.032 BaseBdev2 00:23:49.032 21:20:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:49.032 21:20:11 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:49.032 21:20:11 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:49.291 BaseBdev3 00:23:49.291 21:20:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:49.291 21:20:11 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:49.291 21:20:11 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:49.549 BaseBdev4 00:23:49.549 21:20:12 -- 
bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:49.807 spare_malloc 00:23:49.807 21:20:12 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:50.066 spare_delay 00:23:50.066 21:20:12 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:50.324 [2024-06-07 21:20:12.751861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:50.325 [2024-06-07 21:20:12.751959] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.325 [2024-06-07 21:20:12.752009] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:50.325 [2024-06-07 21:20:12.752058] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.325 [2024-06-07 21:20:12.755076] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.325 [2024-06-07 21:20:12.755124] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:50.325 spare 00:23:50.325 21:20:12 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:50.325 [2024-06-07 21:20:12.988095] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:50.325 [2024-06-07 21:20:12.990227] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:50.325 [2024-06-07 21:20:12.990295] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:50.325 [2024-06-07 21:20:12.990331] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:50.325 [2024-06-07 21:20:12.990413] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:23:50.325 [2024-06-07 21:20:12.990425] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:50.325 [2024-06-07 21:20:12.990650] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:23:50.325 [2024-06-07 21:20:12.991472] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:23:50.325 [2024-06-07 21:20:12.991495] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:23:50.325 [2024-06-07 21:20:12.991737] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:50.585 21:20:13 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:50.585 21:20:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:50.585 21:20:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:50.585 21:20:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:50.585 21:20:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:50.585 21:20:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:50.585 21:20:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:50.585 21:20:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:50.585 21:20:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:50.585 
21:20:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:50.585 21:20:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.585 21:20:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.585 21:20:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:50.585 "name": "raid_bdev1", 00:23:50.585 "uuid": "dbd1c471-fda7-43ea-92eb-52f2bd5b08c8", 00:23:50.585 "strip_size_kb": 64, 00:23:50.585 "state": "online", 00:23:50.585 "raid_level": "raid5f", 00:23:50.585 "superblock": false, 00:23:50.585 "num_base_bdevs": 4, 00:23:50.585 "num_base_bdevs_discovered": 4, 00:23:50.585 "num_base_bdevs_operational": 4, 00:23:50.585 "base_bdevs_list": [ 00:23:50.585 { 00:23:50.585 "name": "BaseBdev1", 00:23:50.585 "uuid": "9f8a6ff1-98e2-4c32-a0f2-3769c551d4ba", 00:23:50.585 "is_configured": true, 00:23:50.585 "data_offset": 0, 00:23:50.585 "data_size": 65536 00:23:50.585 }, 00:23:50.585 { 00:23:50.585 "name": "BaseBdev2", 00:23:50.585 "uuid": "c7c171ef-7192-4b73-a2be-5e36dd0d410f", 00:23:50.585 "is_configured": true, 00:23:50.585 "data_offset": 0, 00:23:50.585 "data_size": 65536 00:23:50.585 }, 00:23:50.585 { 00:23:50.585 "name": "BaseBdev3", 00:23:50.585 "uuid": "216380b1-2162-4921-9da3-20715f24813f", 00:23:50.585 "is_configured": true, 00:23:50.585 "data_offset": 0, 00:23:50.585 "data_size": 65536 00:23:50.585 }, 00:23:50.585 { 00:23:50.585 "name": "BaseBdev4", 00:23:50.585 "uuid": "252918ba-75ff-4e9b-804f-d9c0895885fd", 00:23:50.586 "is_configured": true, 00:23:50.586 "data_offset": 0, 00:23:50.586 "data_size": 65536 00:23:50.586 } 00:23:50.586 ] 00:23:50.586 }' 00:23:50.586 21:20:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:50.586 21:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:51.152 21:20:13 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:51.152 21:20:13 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:51.411 [2024-06-07 21:20:13.998463] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:51.411 21:20:14 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:23:51.411 21:20:14 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.411 21:20:14 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:51.669 21:20:14 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:23:51.669 21:20:14 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:51.669 21:20:14 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:51.669 21:20:14 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:51.669 21:20:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:51.669 21:20:14 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:51.669 21:20:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:51.669 21:20:14 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:51.669 21:20:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:51.669 21:20:14 -- bdev/nbd_common.sh@12 -- # local i 00:23:51.669 21:20:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:51.669 21:20:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:51.669 21:20:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:51.928 [2024-06-07 
21:20:14.434292] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:23:51.928 /dev/nbd0 00:23:51.928 21:20:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:51.928 21:20:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:51.928 21:20:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:51.928 21:20:14 -- common/autotest_common.sh@857 -- # local i 00:23:51.928 21:20:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:51.928 21:20:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:51.928 21:20:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:51.928 21:20:14 -- common/autotest_common.sh@861 -- # break 00:23:51.928 21:20:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:51.928 21:20:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:51.928 21:20:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:51.928 1+0 records in 00:23:51.928 1+0 records out 00:23:51.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227242 s, 18.0 MB/s 00:23:51.928 21:20:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:51.928 21:20:14 -- common/autotest_common.sh@874 -- # size=4096 00:23:51.928 21:20:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:51.928 21:20:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:51.928 21:20:14 -- common/autotest_common.sh@877 -- # return 0 00:23:51.928 21:20:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:51.928 21:20:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:51.928 21:20:14 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:23:51.928 21:20:14 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:23:51.928 21:20:14 -- bdev/bdev_raid.sh@582 -- # echo 192 00:23:51.928 21:20:14 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:23:52.493 512+0 records in 00:23:52.493 512+0 records out 00:23:52.493 100663296 bytes (101 MB, 96 MiB) copied, 0.423589 s, 238 MB/s 00:23:52.493 21:20:14 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:52.493 21:20:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:52.493 21:20:14 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:52.493 21:20:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:52.493 21:20:14 -- bdev/nbd_common.sh@51 -- # local i 00:23:52.493 21:20:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:52.493 21:20:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:52.493 21:20:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:52.493 21:20:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:52.493 21:20:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:52.493 21:20:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:52.493 21:20:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:52.493 21:20:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:52.493 21:20:15 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:52.493 [2024-06-07 21:20:15.134111] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:52.750 21:20:15 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:52.750 21:20:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:52.750 21:20:15 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:52.750 21:20:15 -- bdev/nbd_common.sh@41 -- # break 00:23:52.750 21:20:15 -- bdev/nbd_common.sh@45 -- # return 0 00:23:52.750 21:20:15 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:53.008 [2024-06-07 21:20:15.481773] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:53.008 21:20:15 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:53.008 21:20:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:53.008 21:20:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:53.008 21:20:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:53.008 21:20:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:53.008 21:20:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:53.008 21:20:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:53.008 21:20:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:53.008 21:20:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:53.008 21:20:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:53.008 21:20:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.008 21:20:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.265 21:20:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:53.265 "name": "raid_bdev1", 00:23:53.265 "uuid": "dbd1c471-fda7-43ea-92eb-52f2bd5b08c8", 00:23:53.265 "strip_size_kb": 64, 00:23:53.265 "state": "online", 00:23:53.265 "raid_level": "raid5f", 00:23:53.265 "superblock": false, 00:23:53.265 "num_base_bdevs": 4, 00:23:53.265 "num_base_bdevs_discovered": 3, 00:23:53.265 "num_base_bdevs_operational": 3, 00:23:53.265 "base_bdevs_list": [ 00:23:53.265 { 00:23:53.265 "name": null, 00:23:53.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.265 "is_configured": false, 00:23:53.265 "data_offset": 0, 00:23:53.265 "data_size": 65536 00:23:53.265 }, 00:23:53.265 { 00:23:53.265 "name": "BaseBdev2", 00:23:53.265 "uuid": "c7c171ef-7192-4b73-a2be-5e36dd0d410f", 00:23:53.265 "is_configured": true, 00:23:53.265 "data_offset": 0, 00:23:53.265 "data_size": 65536 00:23:53.266 }, 00:23:53.266 { 00:23:53.266 "name": "BaseBdev3", 00:23:53.266 "uuid": "216380b1-2162-4921-9da3-20715f24813f", 00:23:53.266 "is_configured": true, 00:23:53.266 "data_offset": 0, 00:23:53.266 "data_size": 65536 00:23:53.266 }, 00:23:53.266 { 00:23:53.266 "name": "BaseBdev4", 00:23:53.266 "uuid": "252918ba-75ff-4e9b-804f-d9c0895885fd", 00:23:53.266 "is_configured": true, 00:23:53.266 "data_offset": 0, 00:23:53.266 "data_size": 65536 00:23:53.266 } 00:23:53.266 ] 00:23:53.266 }' 00:23:53.266 21:20:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:53.266 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:23:53.831 21:20:16 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:54.089 [2024-06-07 21:20:16.590092] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:54.089 [2024-06-07 21:20:16.590161] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:54.089 [2024-06-07 21:20:16.594481] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cc70 00:23:54.089 
[2024-06-07 21:20:16.596986] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:54.089 21:20:16 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:55.023 21:20:17 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:55.023 21:20:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:55.023 21:20:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:55.023 21:20:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:55.023 21:20:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:55.023 21:20:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.023 21:20:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.281 21:20:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:55.281 "name": "raid_bdev1", 00:23:55.281 "uuid": "dbd1c471-fda7-43ea-92eb-52f2bd5b08c8", 00:23:55.281 "strip_size_kb": 64, 00:23:55.281 "state": "online", 00:23:55.281 "raid_level": "raid5f", 00:23:55.281 "superblock": false, 00:23:55.281 "num_base_bdevs": 4, 00:23:55.281 "num_base_bdevs_discovered": 4, 00:23:55.281 "num_base_bdevs_operational": 4, 00:23:55.281 "process": { 00:23:55.281 "type": "rebuild", 00:23:55.281 "target": "spare", 00:23:55.281 "progress": { 00:23:55.281 "blocks": 23040, 00:23:55.281 "percent": 11 00:23:55.281 } 00:23:55.281 }, 00:23:55.281 "base_bdevs_list": [ 00:23:55.281 { 00:23:55.281 "name": "spare", 00:23:55.281 "uuid": "d5ff967e-5210-5cea-9ec7-f4753357e9a2", 00:23:55.281 "is_configured": true, 00:23:55.281 "data_offset": 0, 00:23:55.281 "data_size": 65536 00:23:55.281 }, 00:23:55.281 { 00:23:55.281 "name": "BaseBdev2", 00:23:55.281 "uuid": "c7c171ef-7192-4b73-a2be-5e36dd0d410f", 00:23:55.281 "is_configured": true, 00:23:55.281 "data_offset": 0, 00:23:55.281 "data_size": 65536 00:23:55.281 }, 00:23:55.281 { 00:23:55.281 "name": "BaseBdev3", 00:23:55.281 "uuid": "216380b1-2162-4921-9da3-20715f24813f", 00:23:55.281 "is_configured": true, 00:23:55.281 "data_offset": 0, 00:23:55.281 "data_size": 65536 00:23:55.281 }, 00:23:55.281 { 00:23:55.281 "name": "BaseBdev4", 00:23:55.281 "uuid": "252918ba-75ff-4e9b-804f-d9c0895885fd", 00:23:55.281 "is_configured": true, 00:23:55.281 "data_offset": 0, 00:23:55.281 "data_size": 65536 00:23:55.281 } 00:23:55.281 ] 00:23:55.281 }' 00:23:55.281 21:20:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:55.281 21:20:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:55.281 21:20:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:55.538 21:20:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:55.538 21:20:17 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:55.538 [2024-06-07 21:20:18.200307] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:55.538 [2024-06-07 21:20:18.210587] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:55.538 [2024-06-07 21:20:18.211209] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:55.845 21:20:18 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:55.845 21:20:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:55.845 21:20:18 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:23:55.845 21:20:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:55.845 21:20:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:55.845 21:20:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:55.845 21:20:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:55.845 21:20:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:55.845 21:20:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:55.845 21:20:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:55.845 21:20:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.845 21:20:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.845 21:20:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:55.845 "name": "raid_bdev1", 00:23:55.845 "uuid": "dbd1c471-fda7-43ea-92eb-52f2bd5b08c8", 00:23:55.845 "strip_size_kb": 64, 00:23:55.845 "state": "online", 00:23:55.845 "raid_level": "raid5f", 00:23:55.845 "superblock": false, 00:23:55.845 "num_base_bdevs": 4, 00:23:55.845 "num_base_bdevs_discovered": 3, 00:23:55.845 "num_base_bdevs_operational": 3, 00:23:55.845 "base_bdevs_list": [ 00:23:55.846 { 00:23:55.846 "name": null, 00:23:55.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.846 "is_configured": false, 00:23:55.846 "data_offset": 0, 00:23:55.846 "data_size": 65536 00:23:55.846 }, 00:23:55.846 { 00:23:55.846 "name": "BaseBdev2", 00:23:55.846 "uuid": "c7c171ef-7192-4b73-a2be-5e36dd0d410f", 00:23:55.846 "is_configured": true, 00:23:55.846 "data_offset": 0, 00:23:55.846 "data_size": 65536 00:23:55.846 }, 00:23:55.846 { 00:23:55.846 "name": "BaseBdev3", 00:23:55.846 "uuid": "216380b1-2162-4921-9da3-20715f24813f", 00:23:55.846 "is_configured": true, 00:23:55.846 "data_offset": 0, 00:23:55.846 "data_size": 65536 00:23:55.846 }, 00:23:55.846 { 00:23:55.846 "name": "BaseBdev4", 00:23:55.846 "uuid": "252918ba-75ff-4e9b-804f-d9c0895885fd", 00:23:55.846 "is_configured": true, 00:23:55.846 "data_offset": 0, 00:23:55.846 "data_size": 65536 00:23:55.846 } 00:23:55.846 ] 00:23:55.846 }' 00:23:55.846 21:20:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:55.846 21:20:18 -- common/autotest_common.sh@10 -- # set +x 00:23:56.795 21:20:19 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:56.795 21:20:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:56.795 21:20:19 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:56.795 21:20:19 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:56.795 21:20:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:56.795 21:20:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.795 21:20:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.795 21:20:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:56.795 "name": "raid_bdev1", 00:23:56.795 "uuid": "dbd1c471-fda7-43ea-92eb-52f2bd5b08c8", 00:23:56.795 "strip_size_kb": 64, 00:23:56.795 "state": "online", 00:23:56.795 "raid_level": "raid5f", 00:23:56.795 "superblock": false, 00:23:56.795 "num_base_bdevs": 4, 00:23:56.795 "num_base_bdevs_discovered": 3, 00:23:56.795 "num_base_bdevs_operational": 3, 00:23:56.795 "base_bdevs_list": [ 00:23:56.795 { 00:23:56.795 "name": null, 00:23:56.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:56.795 "is_configured": 
false, 00:23:56.795 "data_offset": 0, 00:23:56.795 "data_size": 65536 00:23:56.795 }, 00:23:56.795 { 00:23:56.795 "name": "BaseBdev2", 00:23:56.795 "uuid": "c7c171ef-7192-4b73-a2be-5e36dd0d410f", 00:23:56.795 "is_configured": true, 00:23:56.795 "data_offset": 0, 00:23:56.795 "data_size": 65536 00:23:56.795 }, 00:23:56.795 { 00:23:56.795 "name": "BaseBdev3", 00:23:56.795 "uuid": "216380b1-2162-4921-9da3-20715f24813f", 00:23:56.795 "is_configured": true, 00:23:56.795 "data_offset": 0, 00:23:56.795 "data_size": 65536 00:23:56.795 }, 00:23:56.795 { 00:23:56.795 "name": "BaseBdev4", 00:23:56.795 "uuid": "252918ba-75ff-4e9b-804f-d9c0895885fd", 00:23:56.795 "is_configured": true, 00:23:56.795 "data_offset": 0, 00:23:56.795 "data_size": 65536 00:23:56.795 } 00:23:56.795 ] 00:23:56.795 }' 00:23:56.795 21:20:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:56.795 21:20:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:56.795 21:20:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:57.054 21:20:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:57.054 21:20:19 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:57.054 [2024-06-07 21:20:19.702540] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:57.054 [2024-06-07 21:20:19.702612] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:57.054 [2024-06-07 21:20:19.706900] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ce10 00:23:57.054 [2024-06-07 21:20:19.709539] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:57.054 21:20:19 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:58.429 21:20:20 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:58.429 21:20:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:58.429 21:20:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:58.429 21:20:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:58.429 21:20:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:58.429 21:20:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.429 21:20:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.429 21:20:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:58.429 "name": "raid_bdev1", 00:23:58.429 "uuid": "dbd1c471-fda7-43ea-92eb-52f2bd5b08c8", 00:23:58.429 "strip_size_kb": 64, 00:23:58.429 "state": "online", 00:23:58.429 "raid_level": "raid5f", 00:23:58.429 "superblock": false, 00:23:58.429 "num_base_bdevs": 4, 00:23:58.429 "num_base_bdevs_discovered": 4, 00:23:58.429 "num_base_bdevs_operational": 4, 00:23:58.429 "process": { 00:23:58.429 "type": "rebuild", 00:23:58.429 "target": "spare", 00:23:58.429 "progress": { 00:23:58.429 "blocks": 23040, 00:23:58.429 "percent": 11 00:23:58.429 } 00:23:58.429 }, 00:23:58.429 "base_bdevs_list": [ 00:23:58.429 { 00:23:58.429 "name": "spare", 00:23:58.429 "uuid": "d5ff967e-5210-5cea-9ec7-f4753357e9a2", 00:23:58.429 "is_configured": true, 00:23:58.429 "data_offset": 0, 00:23:58.429 "data_size": 65536 00:23:58.429 }, 00:23:58.429 { 00:23:58.429 "name": "BaseBdev2", 00:23:58.429 "uuid": "c7c171ef-7192-4b73-a2be-5e36dd0d410f", 00:23:58.429 "is_configured": true, 00:23:58.429 
"data_offset": 0, 00:23:58.429 "data_size": 65536 00:23:58.429 }, 00:23:58.429 { 00:23:58.429 "name": "BaseBdev3", 00:23:58.429 "uuid": "216380b1-2162-4921-9da3-20715f24813f", 00:23:58.429 "is_configured": true, 00:23:58.429 "data_offset": 0, 00:23:58.429 "data_size": 65536 00:23:58.429 }, 00:23:58.429 { 00:23:58.429 "name": "BaseBdev4", 00:23:58.429 "uuid": "252918ba-75ff-4e9b-804f-d9c0895885fd", 00:23:58.429 "is_configured": true, 00:23:58.429 "data_offset": 0, 00:23:58.429 "data_size": 65536 00:23:58.429 } 00:23:58.429 ] 00:23:58.429 }' 00:23:58.429 21:20:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:58.429 21:20:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:58.429 21:20:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:58.429 21:20:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:58.429 21:20:21 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:23:58.429 21:20:21 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:23:58.429 21:20:21 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:23:58.429 21:20:21 -- bdev/bdev_raid.sh@657 -- # local timeout=681 00:23:58.429 21:20:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:58.429 21:20:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:58.429 21:20:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:58.429 21:20:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:58.429 21:20:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:58.429 21:20:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:58.430 21:20:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.430 21:20:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.688 21:20:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:58.688 "name": "raid_bdev1", 00:23:58.688 "uuid": "dbd1c471-fda7-43ea-92eb-52f2bd5b08c8", 00:23:58.688 "strip_size_kb": 64, 00:23:58.688 "state": "online", 00:23:58.688 "raid_level": "raid5f", 00:23:58.688 "superblock": false, 00:23:58.688 "num_base_bdevs": 4, 00:23:58.688 "num_base_bdevs_discovered": 4, 00:23:58.688 "num_base_bdevs_operational": 4, 00:23:58.688 "process": { 00:23:58.688 "type": "rebuild", 00:23:58.688 "target": "spare", 00:23:58.688 "progress": { 00:23:58.688 "blocks": 28800, 00:23:58.688 "percent": 14 00:23:58.688 } 00:23:58.688 }, 00:23:58.688 "base_bdevs_list": [ 00:23:58.688 { 00:23:58.688 "name": "spare", 00:23:58.688 "uuid": "d5ff967e-5210-5cea-9ec7-f4753357e9a2", 00:23:58.688 "is_configured": true, 00:23:58.688 "data_offset": 0, 00:23:58.688 "data_size": 65536 00:23:58.688 }, 00:23:58.688 { 00:23:58.688 "name": "BaseBdev2", 00:23:58.688 "uuid": "c7c171ef-7192-4b73-a2be-5e36dd0d410f", 00:23:58.688 "is_configured": true, 00:23:58.688 "data_offset": 0, 00:23:58.688 "data_size": 65536 00:23:58.688 }, 00:23:58.688 { 00:23:58.688 "name": "BaseBdev3", 00:23:58.688 "uuid": "216380b1-2162-4921-9da3-20715f24813f", 00:23:58.688 "is_configured": true, 00:23:58.688 "data_offset": 0, 00:23:58.688 "data_size": 65536 00:23:58.688 }, 00:23:58.688 { 00:23:58.688 "name": "BaseBdev4", 00:23:58.688 "uuid": "252918ba-75ff-4e9b-804f-d9c0895885fd", 00:23:58.688 "is_configured": true, 00:23:58.688 "data_offset": 0, 00:23:58.688 "data_size": 65536 00:23:58.688 } 00:23:58.688 ] 00:23:58.688 }' 00:23:58.688 21:20:21 -- bdev/bdev_raid.sh@190 -- 
# jq -r '.process.type // "none"' 00:23:58.947 21:20:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:58.947 21:20:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:58.947 21:20:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:58.947 21:20:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:59.883 21:20:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:59.883 21:20:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:59.883 21:20:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:59.883 21:20:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:59.883 21:20:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:59.884 21:20:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:59.884 21:20:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.884 21:20:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.142 21:20:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:00.143 "name": "raid_bdev1", 00:24:00.143 "uuid": "dbd1c471-fda7-43ea-92eb-52f2bd5b08c8", 00:24:00.143 "strip_size_kb": 64, 00:24:00.143 "state": "online", 00:24:00.143 "raid_level": "raid5f", 00:24:00.143 "superblock": false, 00:24:00.143 "num_base_bdevs": 4, 00:24:00.143 "num_base_bdevs_discovered": 4, 00:24:00.143 "num_base_bdevs_operational": 4, 00:24:00.143 "process": { 00:24:00.143 "type": "rebuild", 00:24:00.143 "target": "spare", 00:24:00.143 "progress": { 00:24:00.143 "blocks": 55680, 00:24:00.143 "percent": 28 00:24:00.143 } 00:24:00.143 }, 00:24:00.143 "base_bdevs_list": [ 00:24:00.143 { 00:24:00.143 "name": "spare", 00:24:00.143 "uuid": "d5ff967e-5210-5cea-9ec7-f4753357e9a2", 00:24:00.143 "is_configured": true, 00:24:00.143 "data_offset": 0, 00:24:00.143 "data_size": 65536 00:24:00.143 }, 00:24:00.143 { 00:24:00.143 "name": "BaseBdev2", 00:24:00.143 "uuid": "c7c171ef-7192-4b73-a2be-5e36dd0d410f", 00:24:00.143 "is_configured": true, 00:24:00.143 "data_offset": 0, 00:24:00.143 "data_size": 65536 00:24:00.143 }, 00:24:00.143 { 00:24:00.143 "name": "BaseBdev3", 00:24:00.143 "uuid": "216380b1-2162-4921-9da3-20715f24813f", 00:24:00.143 "is_configured": true, 00:24:00.143 "data_offset": 0, 00:24:00.143 "data_size": 65536 00:24:00.143 }, 00:24:00.143 { 00:24:00.143 "name": "BaseBdev4", 00:24:00.143 "uuid": "252918ba-75ff-4e9b-804f-d9c0895885fd", 00:24:00.143 "is_configured": true, 00:24:00.143 "data_offset": 0, 00:24:00.143 "data_size": 65536 00:24:00.143 } 00:24:00.143 ] 00:24:00.143 }' 00:24:00.143 21:20:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:00.143 21:20:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:00.143 21:20:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:00.402 21:20:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:00.402 21:20:22 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:01.337 21:20:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:01.337 21:20:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:01.337 21:20:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:01.337 21:20:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:01.337 21:20:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:01.337 21:20:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:01.337 
21:20:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.337 21:20:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.596 21:20:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:01.596 "name": "raid_bdev1", 00:24:01.596 "uuid": "dbd1c471-fda7-43ea-92eb-52f2bd5b08c8", 00:24:01.596 "strip_size_kb": 64, 00:24:01.596 "state": "online", 00:24:01.596 "raid_level": "raid5f", 00:24:01.596 "superblock": false, 00:24:01.596 "num_base_bdevs": 4, 00:24:01.596 "num_base_bdevs_discovered": 4, 00:24:01.596 "num_base_bdevs_operational": 4, 00:24:01.596 "process": { 00:24:01.596 "type": "rebuild", 00:24:01.596 "target": "spare", 00:24:01.596 "progress": { 00:24:01.596 "blocks": 82560, 00:24:01.596 "percent": 41 00:24:01.596 } 00:24:01.596 }, 00:24:01.596 "base_bdevs_list": [ 00:24:01.596 { 00:24:01.596 "name": "spare", 00:24:01.596 "uuid": "d5ff967e-5210-5cea-9ec7-f4753357e9a2", 00:24:01.596 "is_configured": true, 00:24:01.596 "data_offset": 0, 00:24:01.596 "data_size": 65536 00:24:01.596 }, 00:24:01.596 { 00:24:01.596 "name": "BaseBdev2", 00:24:01.596 "uuid": "c7c171ef-7192-4b73-a2be-5e36dd0d410f", 00:24:01.596 "is_configured": true, 00:24:01.596 "data_offset": 0, 00:24:01.596 "data_size": 65536 00:24:01.596 }, 00:24:01.596 { 00:24:01.596 "name": "BaseBdev3", 00:24:01.596 "uuid": "216380b1-2162-4921-9da3-20715f24813f", 00:24:01.596 "is_configured": true, 00:24:01.596 "data_offset": 0, 00:24:01.596 "data_size": 65536 00:24:01.596 }, 00:24:01.596 { 00:24:01.596 "name": "BaseBdev4", 00:24:01.596 "uuid": "252918ba-75ff-4e9b-804f-d9c0895885fd", 00:24:01.596 "is_configured": true, 00:24:01.596 "data_offset": 0, 00:24:01.596 "data_size": 65536 00:24:01.596 } 00:24:01.596 ] 00:24:01.596 }' 00:24:01.596 21:20:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:01.596 21:20:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:01.596 21:20:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:01.596 21:20:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:01.596 21:20:24 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:02.973 21:20:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:02.973 21:20:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:02.973 21:20:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:02.973 21:20:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:02.973 21:20:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:02.973 21:20:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:02.973 21:20:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.973 21:20:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.973 21:20:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:02.973 "name": "raid_bdev1", 00:24:02.973 "uuid": "dbd1c471-fda7-43ea-92eb-52f2bd5b08c8", 00:24:02.973 "strip_size_kb": 64, 00:24:02.973 "state": "online", 00:24:02.973 "raid_level": "raid5f", 00:24:02.973 "superblock": false, 00:24:02.973 "num_base_bdevs": 4, 00:24:02.973 "num_base_bdevs_discovered": 4, 00:24:02.973 "num_base_bdevs_operational": 4, 00:24:02.973 "process": { 00:24:02.973 "type": "rebuild", 00:24:02.973 "target": "spare", 00:24:02.973 "progress": { 00:24:02.973 "blocks": 109440, 00:24:02.973 "percent": 55 
00:24:02.974 } 00:24:02.974 }, 00:24:02.974 "base_bdevs_list": [ 00:24:02.974 { 00:24:02.974 "name": "spare", 00:24:02.974 "uuid": "d5ff967e-5210-5cea-9ec7-f4753357e9a2", 00:24:02.974 "is_configured": true, 00:24:02.974 "data_offset": 0, 00:24:02.974 "data_size": 65536 00:24:02.974 }, 00:24:02.974 { 00:24:02.974 "name": "BaseBdev2", 00:24:02.974 "uuid": "c7c171ef-7192-4b73-a2be-5e36dd0d410f", 00:24:02.974 "is_configured": true, 00:24:02.974 "data_offset": 0, 00:24:02.974 "data_size": 65536 00:24:02.974 }, 00:24:02.974 { 00:24:02.974 "name": "BaseBdev3", 00:24:02.974 "uuid": "216380b1-2162-4921-9da3-20715f24813f", 00:24:02.974 "is_configured": true, 00:24:02.974 "data_offset": 0, 00:24:02.974 "data_size": 65536 00:24:02.974 }, 00:24:02.974 { 00:24:02.974 "name": "BaseBdev4", 00:24:02.974 "uuid": "252918ba-75ff-4e9b-804f-d9c0895885fd", 00:24:02.974 "is_configured": true, 00:24:02.974 "data_offset": 0, 00:24:02.974 "data_size": 65536 00:24:02.974 } 00:24:02.974 ] 00:24:02.974 }' 00:24:02.974 21:20:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:02.974 21:20:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:02.974 21:20:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:02.974 21:20:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:02.974 21:20:25 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:04.350 21:20:26 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:04.350 21:20:26 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:04.350 21:20:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:04.350 21:20:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:04.350 21:20:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:04.350 21:20:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:04.350 21:20:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.350 21:20:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.350 21:20:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:04.350 "name": "raid_bdev1", 00:24:04.350 "uuid": "dbd1c471-fda7-43ea-92eb-52f2bd5b08c8", 00:24:04.350 "strip_size_kb": 64, 00:24:04.350 "state": "online", 00:24:04.350 "raid_level": "raid5f", 00:24:04.350 "superblock": false, 00:24:04.350 "num_base_bdevs": 4, 00:24:04.350 "num_base_bdevs_discovered": 4, 00:24:04.350 "num_base_bdevs_operational": 4, 00:24:04.350 "process": { 00:24:04.350 "type": "rebuild", 00:24:04.350 "target": "spare", 00:24:04.350 "progress": { 00:24:04.350 "blocks": 136320, 00:24:04.350 "percent": 69 00:24:04.350 } 00:24:04.350 }, 00:24:04.350 "base_bdevs_list": [ 00:24:04.350 { 00:24:04.350 "name": "spare", 00:24:04.350 "uuid": "d5ff967e-5210-5cea-9ec7-f4753357e9a2", 00:24:04.350 "is_configured": true, 00:24:04.350 "data_offset": 0, 00:24:04.350 "data_size": 65536 00:24:04.350 }, 00:24:04.350 { 00:24:04.350 "name": "BaseBdev2", 00:24:04.350 "uuid": "c7c171ef-7192-4b73-a2be-5e36dd0d410f", 00:24:04.350 "is_configured": true, 00:24:04.350 "data_offset": 0, 00:24:04.350 "data_size": 65536 00:24:04.350 }, 00:24:04.350 { 00:24:04.350 "name": "BaseBdev3", 00:24:04.350 "uuid": "216380b1-2162-4921-9da3-20715f24813f", 00:24:04.350 "is_configured": true, 00:24:04.350 "data_offset": 0, 00:24:04.350 "data_size": 65536 00:24:04.350 }, 00:24:04.350 { 00:24:04.350 "name": "BaseBdev4", 00:24:04.350 "uuid": 
"252918ba-75ff-4e9b-804f-d9c0895885fd", 00:24:04.350 "is_configured": true, 00:24:04.350 "data_offset": 0, 00:24:04.350 "data_size": 65536 00:24:04.350 } 00:24:04.350 ] 00:24:04.350 }' 00:24:04.350 21:20:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:04.350 21:20:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:04.350 21:20:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:04.350 21:20:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:04.350 21:20:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:05.729 21:20:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:05.729 21:20:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.729 21:20:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:05.729 21:20:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:05.729 21:20:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:05.729 21:20:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:05.729 21:20:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.729 21:20:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.730 21:20:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:05.730 "name": "raid_bdev1", 00:24:05.730 "uuid": "dbd1c471-fda7-43ea-92eb-52f2bd5b08c8", 00:24:05.730 "strip_size_kb": 64, 00:24:05.730 "state": "online", 00:24:05.730 "raid_level": "raid5f", 00:24:05.730 "superblock": false, 00:24:05.730 "num_base_bdevs": 4, 00:24:05.730 "num_base_bdevs_discovered": 4, 00:24:05.730 "num_base_bdevs_operational": 4, 00:24:05.730 "process": { 00:24:05.730 "type": "rebuild", 00:24:05.730 "target": "spare", 00:24:05.730 "progress": { 00:24:05.730 "blocks": 161280, 00:24:05.730 "percent": 82 00:24:05.730 } 00:24:05.730 }, 00:24:05.730 "base_bdevs_list": [ 00:24:05.730 { 00:24:05.730 "name": "spare", 00:24:05.730 "uuid": "d5ff967e-5210-5cea-9ec7-f4753357e9a2", 00:24:05.730 "is_configured": true, 00:24:05.730 "data_offset": 0, 00:24:05.730 "data_size": 65536 00:24:05.730 }, 00:24:05.730 { 00:24:05.730 "name": "BaseBdev2", 00:24:05.730 "uuid": "c7c171ef-7192-4b73-a2be-5e36dd0d410f", 00:24:05.730 "is_configured": true, 00:24:05.730 "data_offset": 0, 00:24:05.730 "data_size": 65536 00:24:05.730 }, 00:24:05.730 { 00:24:05.730 "name": "BaseBdev3", 00:24:05.730 "uuid": "216380b1-2162-4921-9da3-20715f24813f", 00:24:05.730 "is_configured": true, 00:24:05.730 "data_offset": 0, 00:24:05.730 "data_size": 65536 00:24:05.730 }, 00:24:05.730 { 00:24:05.730 "name": "BaseBdev4", 00:24:05.730 "uuid": "252918ba-75ff-4e9b-804f-d9c0895885fd", 00:24:05.730 "is_configured": true, 00:24:05.730 "data_offset": 0, 00:24:05.730 "data_size": 65536 00:24:05.730 } 00:24:05.730 ] 00:24:05.730 }' 00:24:05.730 21:20:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:05.730 21:20:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:05.730 21:20:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:05.730 21:20:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:05.730 21:20:28 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:06.701 21:20:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:06.701 21:20:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:06.701 21:20:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
00:24:06.701 21:20:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:06.701 21:20:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:06.701 21:20:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:06.701 21:20:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.701 21:20:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.960 21:20:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:06.960 "name": "raid_bdev1", 00:24:06.960 "uuid": "dbd1c471-fda7-43ea-92eb-52f2bd5b08c8", 00:24:06.960 "strip_size_kb": 64, 00:24:06.960 "state": "online", 00:24:06.960 "raid_level": "raid5f", 00:24:06.960 "superblock": false, 00:24:06.960 "num_base_bdevs": 4, 00:24:06.960 "num_base_bdevs_discovered": 4, 00:24:06.960 "num_base_bdevs_operational": 4, 00:24:06.960 "process": { 00:24:06.960 "type": "rebuild", 00:24:06.960 "target": "spare", 00:24:06.960 "progress": { 00:24:06.961 "blocks": 188160, 00:24:06.961 "percent": 95 00:24:06.961 } 00:24:06.961 }, 00:24:06.961 "base_bdevs_list": [ 00:24:06.961 { 00:24:06.961 "name": "spare", 00:24:06.961 "uuid": "d5ff967e-5210-5cea-9ec7-f4753357e9a2", 00:24:06.961 "is_configured": true, 00:24:06.961 "data_offset": 0, 00:24:06.961 "data_size": 65536 00:24:06.961 }, 00:24:06.961 { 00:24:06.961 "name": "BaseBdev2", 00:24:06.961 "uuid": "c7c171ef-7192-4b73-a2be-5e36dd0d410f", 00:24:06.961 "is_configured": true, 00:24:06.961 "data_offset": 0, 00:24:06.961 "data_size": 65536 00:24:06.961 }, 00:24:06.961 { 00:24:06.961 "name": "BaseBdev3", 00:24:06.961 "uuid": "216380b1-2162-4921-9da3-20715f24813f", 00:24:06.961 "is_configured": true, 00:24:06.961 "data_offset": 0, 00:24:06.961 "data_size": 65536 00:24:06.961 }, 00:24:06.961 { 00:24:06.961 "name": "BaseBdev4", 00:24:06.961 "uuid": "252918ba-75ff-4e9b-804f-d9c0895885fd", 00:24:06.961 "is_configured": true, 00:24:06.961 "data_offset": 0, 00:24:06.961 "data_size": 65536 00:24:06.961 } 00:24:06.961 ] 00:24:06.961 }' 00:24:06.961 21:20:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:07.220 21:20:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:07.220 21:20:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:07.220 21:20:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:07.220 21:20:29 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:07.479 [2024-06-07 21:20:30.095313] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:07.479 [2024-06-07 21:20:30.095407] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:07.479 [2024-06-07 21:20:30.095514] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:08.415 21:20:30 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:08.415 21:20:30 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.415 21:20:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:08.415 21:20:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:08.415 21:20:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:08.415 21:20:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:08.415 21:20:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.415 21:20:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:24:08.415 21:20:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:08.415 "name": "raid_bdev1", 00:24:08.415 "uuid": "dbd1c471-fda7-43ea-92eb-52f2bd5b08c8", 00:24:08.415 "strip_size_kb": 64, 00:24:08.415 "state": "online", 00:24:08.415 "raid_level": "raid5f", 00:24:08.415 "superblock": false, 00:24:08.415 "num_base_bdevs": 4, 00:24:08.415 "num_base_bdevs_discovered": 4, 00:24:08.415 "num_base_bdevs_operational": 4, 00:24:08.415 "base_bdevs_list": [ 00:24:08.415 { 00:24:08.415 "name": "spare", 00:24:08.415 "uuid": "d5ff967e-5210-5cea-9ec7-f4753357e9a2", 00:24:08.415 "is_configured": true, 00:24:08.415 "data_offset": 0, 00:24:08.415 "data_size": 65536 00:24:08.415 }, 00:24:08.415 { 00:24:08.415 "name": "BaseBdev2", 00:24:08.415 "uuid": "c7c171ef-7192-4b73-a2be-5e36dd0d410f", 00:24:08.415 "is_configured": true, 00:24:08.415 "data_offset": 0, 00:24:08.415 "data_size": 65536 00:24:08.415 }, 00:24:08.415 { 00:24:08.415 "name": "BaseBdev3", 00:24:08.415 "uuid": "216380b1-2162-4921-9da3-20715f24813f", 00:24:08.415 "is_configured": true, 00:24:08.415 "data_offset": 0, 00:24:08.415 "data_size": 65536 00:24:08.415 }, 00:24:08.415 { 00:24:08.415 "name": "BaseBdev4", 00:24:08.415 "uuid": "252918ba-75ff-4e9b-804f-d9c0895885fd", 00:24:08.415 "is_configured": true, 00:24:08.415 "data_offset": 0, 00:24:08.415 "data_size": 65536 00:24:08.415 } 00:24:08.415 ] 00:24:08.415 }' 00:24:08.415 21:20:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:08.415 21:20:31 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:08.415 21:20:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:08.674 21:20:31 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:08.674 21:20:31 -- bdev/bdev_raid.sh@660 -- # break 00:24:08.674 21:20:31 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:08.674 21:20:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:08.674 21:20:31 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:08.674 21:20:31 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:08.674 21:20:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:08.674 21:20:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.674 21:20:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.933 21:20:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:08.933 "name": "raid_bdev1", 00:24:08.933 "uuid": "dbd1c471-fda7-43ea-92eb-52f2bd5b08c8", 00:24:08.933 "strip_size_kb": 64, 00:24:08.933 "state": "online", 00:24:08.933 "raid_level": "raid5f", 00:24:08.933 "superblock": false, 00:24:08.933 "num_base_bdevs": 4, 00:24:08.933 "num_base_bdevs_discovered": 4, 00:24:08.933 "num_base_bdevs_operational": 4, 00:24:08.933 "base_bdevs_list": [ 00:24:08.933 { 00:24:08.933 "name": "spare", 00:24:08.933 "uuid": "d5ff967e-5210-5cea-9ec7-f4753357e9a2", 00:24:08.933 "is_configured": true, 00:24:08.933 "data_offset": 0, 00:24:08.933 "data_size": 65536 00:24:08.933 }, 00:24:08.933 { 00:24:08.933 "name": "BaseBdev2", 00:24:08.933 "uuid": "c7c171ef-7192-4b73-a2be-5e36dd0d410f", 00:24:08.933 "is_configured": true, 00:24:08.933 "data_offset": 0, 00:24:08.933 "data_size": 65536 00:24:08.933 }, 00:24:08.933 { 00:24:08.933 "name": "BaseBdev3", 00:24:08.933 "uuid": "216380b1-2162-4921-9da3-20715f24813f", 00:24:08.933 "is_configured": true, 00:24:08.933 "data_offset": 0, 00:24:08.933 "data_size": 65536 
00:24:08.933 }, 00:24:08.933 { 00:24:08.933 "name": "BaseBdev4", 00:24:08.933 "uuid": "252918ba-75ff-4e9b-804f-d9c0895885fd", 00:24:08.933 "is_configured": true, 00:24:08.933 "data_offset": 0, 00:24:08.933 "data_size": 65536 00:24:08.933 } 00:24:08.933 ] 00:24:08.933 }' 00:24:08.933 21:20:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:08.933 21:20:31 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:08.933 21:20:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:08.933 21:20:31 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:08.933 21:20:31 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:08.933 21:20:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:08.933 21:20:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:08.933 21:20:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:08.933 21:20:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:08.933 21:20:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:08.933 21:20:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:08.933 21:20:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:08.933 21:20:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:08.933 21:20:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:08.933 21:20:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.933 21:20:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.192 21:20:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:09.192 "name": "raid_bdev1", 00:24:09.192 "uuid": "dbd1c471-fda7-43ea-92eb-52f2bd5b08c8", 00:24:09.192 "strip_size_kb": 64, 00:24:09.192 "state": "online", 00:24:09.192 "raid_level": "raid5f", 00:24:09.192 "superblock": false, 00:24:09.192 "num_base_bdevs": 4, 00:24:09.192 "num_base_bdevs_discovered": 4, 00:24:09.192 "num_base_bdevs_operational": 4, 00:24:09.192 "base_bdevs_list": [ 00:24:09.192 { 00:24:09.192 "name": "spare", 00:24:09.192 "uuid": "d5ff967e-5210-5cea-9ec7-f4753357e9a2", 00:24:09.192 "is_configured": true, 00:24:09.192 "data_offset": 0, 00:24:09.192 "data_size": 65536 00:24:09.192 }, 00:24:09.192 { 00:24:09.192 "name": "BaseBdev2", 00:24:09.192 "uuid": "c7c171ef-7192-4b73-a2be-5e36dd0d410f", 00:24:09.192 "is_configured": true, 00:24:09.192 "data_offset": 0, 00:24:09.192 "data_size": 65536 00:24:09.192 }, 00:24:09.192 { 00:24:09.192 "name": "BaseBdev3", 00:24:09.192 "uuid": "216380b1-2162-4921-9da3-20715f24813f", 00:24:09.192 "is_configured": true, 00:24:09.192 "data_offset": 0, 00:24:09.192 "data_size": 65536 00:24:09.192 }, 00:24:09.192 { 00:24:09.192 "name": "BaseBdev4", 00:24:09.192 "uuid": "252918ba-75ff-4e9b-804f-d9c0895885fd", 00:24:09.192 "is_configured": true, 00:24:09.192 "data_offset": 0, 00:24:09.192 "data_size": 65536 00:24:09.192 } 00:24:09.192 ] 00:24:09.192 }' 00:24:09.192 21:20:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:09.192 21:20:31 -- common/autotest_common.sh@10 -- # set +x 00:24:10.128 21:20:32 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:10.128 [2024-06-07 21:20:32.739073] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:10.128 [2024-06-07 21:20:32.739116] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:24:10.129 [2024-06-07 21:20:32.739266] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:10.129 [2024-06-07 21:20:32.739414] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:10.129 [2024-06-07 21:20:32.739446] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:24:10.129 21:20:32 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.129 21:20:32 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:10.387 21:20:32 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:10.387 21:20:32 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:10.387 21:20:32 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:10.387 21:20:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:10.387 21:20:32 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:10.387 21:20:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:10.387 21:20:32 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:10.387 21:20:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:10.387 21:20:32 -- bdev/nbd_common.sh@12 -- # local i 00:24:10.387 21:20:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:10.387 21:20:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:10.387 21:20:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:10.646 /dev/nbd0 00:24:10.646 21:20:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:10.646 21:20:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:10.646 21:20:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:10.646 21:20:33 -- common/autotest_common.sh@857 -- # local i 00:24:10.646 21:20:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:10.646 21:20:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:10.646 21:20:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:10.646 21:20:33 -- common/autotest_common.sh@861 -- # break 00:24:10.646 21:20:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:10.646 21:20:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:10.646 21:20:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:10.646 1+0 records in 00:24:10.646 1+0 records out 00:24:10.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501029 s, 8.2 MB/s 00:24:10.646 21:20:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:10.646 21:20:33 -- common/autotest_common.sh@874 -- # size=4096 00:24:10.646 21:20:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:10.646 21:20:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:10.646 21:20:33 -- common/autotest_common.sh@877 -- # return 0 00:24:10.646 21:20:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:10.646 21:20:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:10.646 21:20:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:10.904 /dev/nbd1 00:24:10.904 21:20:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:10.904 21:20:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 
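The waitfornbd calls traced here for nbd0 and nbd1 are the readiness gate for the exported devices: poll /proc/partitions until the kernel lists the device, then prove it actually serves I/O with one O_DIRECT read of nonzero size. A minimal sketch of that idiom, reconstructed from the xtrace rather than copied from autotest_common.sh, so the retry counts and scratch-file handling are assumptions:

    waitfornbd() {
        local nbd_name=$1
        local scratch=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        local i size
        # first loop: wait for the kernel to list the device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                break
            fi
            sleep 0.1
        done
        # second loop: a single O_DIRECT read must move a nonzero number of
        # bytes, otherwise the nbd connection is not actually usable yet
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct || continue
            size=$(stat -c %s "$scratch")
            rm -f "$scratch"
            if [ "$size" != 0 ]; then
                return 0
            fi
        done
        return 1
    }

Once both exports pass this gate, the run compares them byte for byte with cmp -i 0 /dev/nbd0 /dev/nbd1, which is the integrity check the next entries show before the disks are torn down again.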
00:24:10.904 21:20:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:24:10.904 21:20:33 -- common/autotest_common.sh@857 -- # local i 00:24:10.904 21:20:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:10.904 21:20:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:10.904 21:20:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:24:10.904 21:20:33 -- common/autotest_common.sh@861 -- # break 00:24:10.904 21:20:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:10.904 21:20:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:10.904 21:20:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:10.904 1+0 records in 00:24:10.904 1+0 records out 00:24:10.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618469 s, 6.6 MB/s 00:24:10.904 21:20:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:10.904 21:20:33 -- common/autotest_common.sh@874 -- # size=4096 00:24:10.904 21:20:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:10.904 21:20:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:10.904 21:20:33 -- common/autotest_common.sh@877 -- # return 0 00:24:10.904 21:20:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:10.904 21:20:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:10.904 21:20:33 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:11.162 21:20:33 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:11.162 21:20:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:11.162 21:20:33 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:11.162 21:20:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:11.162 21:20:33 -- bdev/nbd_common.sh@51 -- # local i 00:24:11.162 21:20:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:11.162 21:20:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:11.420 21:20:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:11.420 21:20:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:11.420 21:20:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:11.420 21:20:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:11.420 21:20:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:11.420 21:20:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:11.420 21:20:33 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:11.420 21:20:34 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:11.420 21:20:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:11.420 21:20:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:11.420 21:20:34 -- bdev/nbd_common.sh@41 -- # break 00:24:11.420 21:20:34 -- bdev/nbd_common.sh@45 -- # return 0 00:24:11.420 21:20:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:11.420 21:20:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:11.678 21:20:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:11.678 21:20:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:11.678 21:20:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:11.678 21:20:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:11.678 21:20:34 -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:24:11.678 21:20:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:11.678 21:20:34 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:11.937 21:20:34 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:11.937 21:20:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:11.937 21:20:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:11.937 21:20:34 -- bdev/nbd_common.sh@41 -- # break 00:24:11.937 21:20:34 -- bdev/nbd_common.sh@45 -- # return 0 00:24:11.937 21:20:34 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:24:11.937 21:20:34 -- bdev/bdev_raid.sh@709 -- # killprocess 145011 00:24:11.937 21:20:34 -- common/autotest_common.sh@926 -- # '[' -z 145011 ']' 00:24:11.937 21:20:34 -- common/autotest_common.sh@930 -- # kill -0 145011 00:24:11.937 21:20:34 -- common/autotest_common.sh@931 -- # uname 00:24:11.937 21:20:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:11.937 21:20:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145011 00:24:11.937 21:20:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:11.937 21:20:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:11.937 21:20:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145011' 00:24:11.937 killing process with pid 145011 00:24:11.937 21:20:34 -- common/autotest_common.sh@945 -- # kill 145011 00:24:11.937 21:20:34 -- common/autotest_common.sh@950 -- # wait 145011 00:24:11.937 Received shutdown signal, test time was about 60.000000 seconds 00:24:11.937 00:24:11.937 Latency(us) 00:24:11.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.937 =================================================================================================================== 00:24:11.937 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:11.937 [2024-06-07 21:20:34.421829] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:11.937 [2024-06-07 21:20:34.468525] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:12.196 00:24:12.196 real 0m24.464s 00:24:12.196 user 0m36.753s 00:24:12.196 sys 0m2.538s 00:24:12.196 21:20:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.196 21:20:34 -- common/autotest_common.sh@10 -- # set +x 00:24:12.196 ************************************ 00:24:12.196 END TEST raid5f_rebuild_test 00:24:12.196 ************************************ 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:24:12.196 21:20:34 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:24:12.196 21:20:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:12.196 21:20:34 -- common/autotest_common.sh@10 -- # set +x 00:24:12.196 ************************************ 00:24:12.196 START TEST raid5f_rebuild_test_sb 00:24:12.196 ************************************ 00:24:12.196 21:20:34 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 true false 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; 
done)) 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@544 -- # raid_pid=145675 00:24:12.196 21:20:34 -- bdev/bdev_raid.sh@545 -- # waitforlisten 145675 /var/tmp/spdk-raid.sock 00:24:12.196 21:20:34 -- common/autotest_common.sh@819 -- # '[' -z 145675 ']' 00:24:12.196 21:20:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:12.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:12.197 21:20:34 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:12.197 21:20:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:12.197 21:20:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:12.197 21:20:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:12.197 21:20:34 -- common/autotest_common.sh@10 -- # set +x 00:24:12.197 [2024-06-07 21:20:34.835933] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:12.197 [2024-06-07 21:20:34.836155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145675 ] 00:24:12.197 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:12.197 Zero copy mechanism will not be used. 
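While bdevperf comes up, it is worth spelling out the verification idiom this run leans on at every step: re-read the raid bdev's JSON over the RPC socket, select the bdev by name, and let jq's // operator supply a "none" default when no rebuild process is attached. A minimal sketch of that pattern, reconstructed from the bdev_raid.sh@183-191 xtrace (the socket path and RPC names are the ones in this log; the in-tree function bodies may differ):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    verify_raid_bdev_process() {
        local raid_bdev_name=$1 process_type=$2 target=$3
        local raid_bdev_info
        raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        # '// "none"' substitutes a default when no .process object is attached,
        # so the same check works before a rebuild starts and after it finishes
        [[ $(jq -r '.process.type // "none"' <<<"$raid_bdev_info") == "$process_type" ]] &&
            [[ $(jq -r '.process.target // "none"' <<<"$raid_bdev_info") == "$target" ]]
    }

A mismatch makes the check return nonzero, which either breaks the wait loop (as at bdev_raid.sh@660 above) or fails the test outright; the rebuild-progress loops in this run simply re-run it after sleep 1 until SECONDS passes the timeout or the process entry disappears and both fields collapse to "none".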
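The geometry the superblock test is about to create can be cross-checked from the figures in the log itself. A worked sketch in shell arithmetic; the three-data-plus-one-parity split per raid5f stripe is inferred from these numbers, not from the raid5f implementation:

    strip_size_kb=64 num_base_bdevs=4 blocklen=512
    data_strips=$((num_base_bdevs - 1))                        # 3 data strips + 1 parity per stripe
    write_unit_bytes=$((data_strips * strip_size_kb * 1024))   # 196608 B = one full 192 KiB stripe
    write_unit_blocks=$((write_unit_bytes / blocklen))         # 384, the write_unit_size computed below
    data_size_blocks=$((65536 - 2048))                         # 63488: the superblock takes data_offset=2048 (1 MiB)
    raid_size_blocks=$((data_strips * data_size_blocks))       # 190464, matching 'blockcnt 190464, blocklen 512'
    fill_bytes=$((496 * write_unit_bytes))                     # 97517568 B (~93 MiB), the urandom dd total below

Writing in whole multiples of the 192 KiB stripe (dd bs=196608 oflag=direct, count=496) keeps every write a full-stripe write, which is presumably why the test fills the device in exactly those units before comparing data.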
00:24:12.456 [2024-06-07 21:20:34.999611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.456 [2024-06-07 21:20:35.083041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.714 [2024-06-07 21:20:35.145967] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:13.281 21:20:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:13.281 21:20:35 -- common/autotest_common.sh@852 -- # return 0 00:24:13.281 21:20:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:13.281 21:20:35 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:13.281 21:20:35 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:13.540 BaseBdev1_malloc 00:24:13.540 21:20:36 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:13.799 [2024-06-07 21:20:36.278809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:13.799 [2024-06-07 21:20:36.278972] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.799 [2024-06-07 21:20:36.279015] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:24:13.799 [2024-06-07 21:20:36.279112] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.799 [2024-06-07 21:20:36.282052] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.799 [2024-06-07 21:20:36.282123] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:13.799 BaseBdev1 00:24:13.799 21:20:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:13.799 21:20:36 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:13.799 21:20:36 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:14.058 BaseBdev2_malloc 00:24:14.058 21:20:36 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:14.058 [2024-06-07 21:20:36.715134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:14.058 [2024-06-07 21:20:36.715256] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:14.058 [2024-06-07 21:20:36.715309] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:24:14.058 [2024-06-07 21:20:36.715372] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:14.058 [2024-06-07 21:20:36.717901] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:14.058 [2024-06-07 21:20:36.717966] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:14.058 BaseBdev2 00:24:14.058 21:20:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:14.058 21:20:36 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:14.058 21:20:36 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:14.317 BaseBdev3_malloc 00:24:14.317 21:20:36 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:24:14.576 [2024-06-07 21:20:37.145565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:14.576 [2024-06-07 21:20:37.145686] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:14.576 [2024-06-07 21:20:37.145732] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:14.576 [2024-06-07 21:20:37.145797] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:14.576 [2024-06-07 21:20:37.148179] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:14.576 [2024-06-07 21:20:37.148262] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:14.576 BaseBdev3 00:24:14.576 21:20:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:14.576 21:20:37 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:14.576 21:20:37 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:14.835 BaseBdev4_malloc 00:24:14.835 21:20:37 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:15.103 [2024-06-07 21:20:37.585371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:15.103 [2024-06-07 21:20:37.585533] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:15.103 [2024-06-07 21:20:37.585590] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:15.103 [2024-06-07 21:20:37.585676] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:15.103 [2024-06-07 21:20:37.588267] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:15.103 [2024-06-07 21:20:37.588336] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:15.103 BaseBdev4 00:24:15.103 21:20:37 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:15.377 spare_malloc 00:24:15.377 21:20:37 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:15.377 spare_delay 00:24:15.377 21:20:38 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:15.740 [2024-06-07 21:20:38.209253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:15.740 [2024-06-07 21:20:38.209354] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:15.740 [2024-06-07 21:20:38.209421] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:15.740 [2024-06-07 21:20:38.209479] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:15.740 [2024-06-07 21:20:38.212189] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:15.740 [2024-06-07 21:20:38.212274] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:15.740 spare 00:24:15.740 21:20:38 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:15.997 [2024-06-07 21:20:38.457478] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:15.997 [2024-06-07 21:20:38.459688] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:15.997 [2024-06-07 21:20:38.459810] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:15.997 [2024-06-07 21:20:38.459867] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:15.997 [2024-06-07 21:20:38.460190] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:24:15.997 [2024-06-07 21:20:38.460231] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:15.997 [2024-06-07 21:20:38.460390] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:24:15.997 [2024-06-07 21:20:38.461325] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:24:15.997 [2024-06-07 21:20:38.461350] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:24:15.997 [2024-06-07 21:20:38.461597] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:15.997 21:20:38 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:15.997 21:20:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:15.997 21:20:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:15.997 21:20:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:15.997 21:20:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:15.997 21:20:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:15.997 21:20:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:15.997 21:20:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:15.997 21:20:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:15.997 21:20:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:15.997 21:20:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.997 21:20:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.254 21:20:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:16.254 "name": "raid_bdev1", 00:24:16.254 "uuid": "65c348ed-b6a4-473a-a5f7-2f392461931b", 00:24:16.254 "strip_size_kb": 64, 00:24:16.254 "state": "online", 00:24:16.254 "raid_level": "raid5f", 00:24:16.254 "superblock": true, 00:24:16.254 "num_base_bdevs": 4, 00:24:16.254 "num_base_bdevs_discovered": 4, 00:24:16.254 "num_base_bdevs_operational": 4, 00:24:16.254 "base_bdevs_list": [ 00:24:16.254 { 00:24:16.254 "name": "BaseBdev1", 00:24:16.254 "uuid": "8eebafd2-740e-5cbc-bca2-5c22c2cc6fb5", 00:24:16.254 "is_configured": true, 00:24:16.254 "data_offset": 2048, 00:24:16.254 "data_size": 63488 00:24:16.254 }, 00:24:16.254 { 00:24:16.254 "name": "BaseBdev2", 00:24:16.254 "uuid": "4db7057a-813d-5152-9bf4-7d25c0296463", 00:24:16.254 "is_configured": true, 00:24:16.254 "data_offset": 2048, 00:24:16.254 "data_size": 63488 00:24:16.254 }, 00:24:16.254 { 00:24:16.254 "name": "BaseBdev3", 00:24:16.254 "uuid": "f3666e1d-40f6-5021-bd11-b6d0f0a927fa", 00:24:16.254 "is_configured": true, 00:24:16.254 "data_offset": 2048, 00:24:16.254 "data_size": 63488 00:24:16.254 
}, 00:24:16.254 { 00:24:16.254 "name": "BaseBdev4", 00:24:16.254 "uuid": "55f68438-e9b2-5085-89d4-f33c73c7b168", 00:24:16.254 "is_configured": true, 00:24:16.254 "data_offset": 2048, 00:24:16.254 "data_size": 63488 00:24:16.254 } 00:24:16.254 ] 00:24:16.254 }' 00:24:16.254 21:20:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:16.254 21:20:38 -- common/autotest_common.sh@10 -- # set +x 00:24:16.820 21:20:39 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:16.820 21:20:39 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:17.078 [2024-06-07 21:20:39.526020] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:17.078 21:20:39 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:24:17.078 21:20:39 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.078 21:20:39 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:17.078 21:20:39 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:24:17.078 21:20:39 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:17.078 21:20:39 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:17.078 21:20:39 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:17.078 21:20:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:17.078 21:20:39 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:17.078 21:20:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:17.078 21:20:39 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:17.078 21:20:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:17.078 21:20:39 -- bdev/nbd_common.sh@12 -- # local i 00:24:17.078 21:20:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:17.078 21:20:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:17.078 21:20:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:17.335 [2024-06-07 21:20:39.973982] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:17.335 /dev/nbd0 00:24:17.592 21:20:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:17.592 21:20:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:17.592 21:20:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:17.592 21:20:40 -- common/autotest_common.sh@857 -- # local i 00:24:17.592 21:20:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:17.592 21:20:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:17.592 21:20:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:17.592 21:20:40 -- common/autotest_common.sh@861 -- # break 00:24:17.592 21:20:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:17.592 21:20:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:17.592 21:20:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:17.592 1+0 records in 00:24:17.592 1+0 records out 00:24:17.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270144 s, 15.2 MB/s 00:24:17.592 21:20:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:17.592 21:20:40 -- common/autotest_common.sh@874 -- # size=4096 00:24:17.592 21:20:40 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:17.592 21:20:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:17.592 21:20:40 -- common/autotest_common.sh@877 -- # return 0 00:24:17.592 21:20:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:17.592 21:20:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:17.592 21:20:40 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:24:17.592 21:20:40 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:24:17.592 21:20:40 -- bdev/bdev_raid.sh@582 -- # echo 192 00:24:17.592 21:20:40 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:24:18.158 496+0 records in 00:24:18.158 496+0 records out 00:24:18.158 97517568 bytes (98 MB, 93 MiB) copied, 0.490904 s, 199 MB/s 00:24:18.158 21:20:40 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:18.158 21:20:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:18.158 21:20:40 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:18.158 21:20:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:18.158 21:20:40 -- bdev/nbd_common.sh@51 -- # local i 00:24:18.158 21:20:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:18.158 21:20:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:18.158 21:20:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:18.158 21:20:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:18.158 21:20:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:18.158 21:20:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:18.158 21:20:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:18.158 21:20:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:18.158 21:20:40 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:18.158 [2024-06-07 21:20:40.735706] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:18.416 21:20:40 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:18.416 21:20:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:18.416 21:20:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:18.416 21:20:40 -- bdev/nbd_common.sh@41 -- # break 00:24:18.416 21:20:40 -- bdev/nbd_common.sh@45 -- # return 0 00:24:18.416 21:20:40 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:18.416 [2024-06-07 21:20:41.091385] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:18.674 21:20:41 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:18.674 21:20:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:18.674 21:20:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:18.674 21:20:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:18.674 21:20:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:18.674 21:20:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:18.674 21:20:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:18.674 21:20:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:18.674 21:20:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:18.674 21:20:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:18.674 21:20:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.674 
21:20:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.933 21:20:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:18.933 "name": "raid_bdev1", 00:24:18.933 "uuid": "65c348ed-b6a4-473a-a5f7-2f392461931b", 00:24:18.933 "strip_size_kb": 64, 00:24:18.933 "state": "online", 00:24:18.933 "raid_level": "raid5f", 00:24:18.933 "superblock": true, 00:24:18.933 "num_base_bdevs": 4, 00:24:18.933 "num_base_bdevs_discovered": 3, 00:24:18.933 "num_base_bdevs_operational": 3, 00:24:18.933 "base_bdevs_list": [ 00:24:18.933 { 00:24:18.933 "name": null, 00:24:18.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.933 "is_configured": false, 00:24:18.933 "data_offset": 2048, 00:24:18.933 "data_size": 63488 00:24:18.933 }, 00:24:18.933 { 00:24:18.933 "name": "BaseBdev2", 00:24:18.933 "uuid": "4db7057a-813d-5152-9bf4-7d25c0296463", 00:24:18.933 "is_configured": true, 00:24:18.933 "data_offset": 2048, 00:24:18.933 "data_size": 63488 00:24:18.933 }, 00:24:18.933 { 00:24:18.933 "name": "BaseBdev3", 00:24:18.933 "uuid": "f3666e1d-40f6-5021-bd11-b6d0f0a927fa", 00:24:18.933 "is_configured": true, 00:24:18.933 "data_offset": 2048, 00:24:18.933 "data_size": 63488 00:24:18.933 }, 00:24:18.933 { 00:24:18.933 "name": "BaseBdev4", 00:24:18.933 "uuid": "55f68438-e9b2-5085-89d4-f33c73c7b168", 00:24:18.933 "is_configured": true, 00:24:18.933 "data_offset": 2048, 00:24:18.933 "data_size": 63488 00:24:18.933 } 00:24:18.933 ] 00:24:18.933 }' 00:24:18.933 21:20:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:18.933 21:20:41 -- common/autotest_common.sh@10 -- # set +x 00:24:19.499 21:20:42 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:19.756 [2024-06-07 21:20:42.295616] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:19.756 [2024-06-07 21:20:42.295679] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:19.756 [2024-06-07 21:20:42.300225] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002c860 00:24:19.756 [2024-06-07 21:20:42.302999] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:19.756 21:20:42 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:20.689 21:20:43 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:20.689 21:20:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:20.689 21:20:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:20.689 21:20:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:20.689 21:20:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:20.689 21:20:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.689 21:20:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.946 21:20:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:20.946 "name": "raid_bdev1", 00:24:20.946 "uuid": "65c348ed-b6a4-473a-a5f7-2f392461931b", 00:24:20.946 "strip_size_kb": 64, 00:24:20.946 "state": "online", 00:24:20.946 "raid_level": "raid5f", 00:24:20.946 "superblock": true, 00:24:20.946 "num_base_bdevs": 4, 00:24:20.946 "num_base_bdevs_discovered": 4, 00:24:20.946 "num_base_bdevs_operational": 4, 00:24:20.946 "process": { 00:24:20.946 "type": "rebuild", 00:24:20.946 "target": "spare", 00:24:20.946 "progress": { 00:24:20.946 
"blocks": 23040, 00:24:20.946 "percent": 12 00:24:20.946 } 00:24:20.946 }, 00:24:20.946 "base_bdevs_list": [ 00:24:20.946 { 00:24:20.946 "name": "spare", 00:24:20.946 "uuid": "4d78b1ec-ff8d-5ea2-9150-c78c23f59b71", 00:24:20.946 "is_configured": true, 00:24:20.946 "data_offset": 2048, 00:24:20.946 "data_size": 63488 00:24:20.946 }, 00:24:20.946 { 00:24:20.946 "name": "BaseBdev2", 00:24:20.946 "uuid": "4db7057a-813d-5152-9bf4-7d25c0296463", 00:24:20.946 "is_configured": true, 00:24:20.946 "data_offset": 2048, 00:24:20.946 "data_size": 63488 00:24:20.946 }, 00:24:20.946 { 00:24:20.946 "name": "BaseBdev3", 00:24:20.946 "uuid": "f3666e1d-40f6-5021-bd11-b6d0f0a927fa", 00:24:20.946 "is_configured": true, 00:24:20.946 "data_offset": 2048, 00:24:20.946 "data_size": 63488 00:24:20.946 }, 00:24:20.946 { 00:24:20.946 "name": "BaseBdev4", 00:24:20.946 "uuid": "55f68438-e9b2-5085-89d4-f33c73c7b168", 00:24:20.946 "is_configured": true, 00:24:20.946 "data_offset": 2048, 00:24:20.946 "data_size": 63488 00:24:20.946 } 00:24:20.946 ] 00:24:20.946 }' 00:24:20.946 21:20:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:20.946 21:20:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:20.946 21:20:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:21.204 21:20:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:21.205 21:20:43 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:21.463 [2024-06-07 21:20:43.882240] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:21.463 [2024-06-07 21:20:43.917254] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:21.463 [2024-06-07 21:20:43.917802] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:21.463 21:20:43 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:21.463 21:20:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:21.463 21:20:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:21.463 21:20:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:21.463 21:20:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:21.463 21:20:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:21.463 21:20:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:21.463 21:20:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:21.463 21:20:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:21.463 21:20:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:21.463 21:20:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.463 21:20:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.722 21:20:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:21.722 "name": "raid_bdev1", 00:24:21.722 "uuid": "65c348ed-b6a4-473a-a5f7-2f392461931b", 00:24:21.722 "strip_size_kb": 64, 00:24:21.722 "state": "online", 00:24:21.722 "raid_level": "raid5f", 00:24:21.722 "superblock": true, 00:24:21.722 "num_base_bdevs": 4, 00:24:21.722 "num_base_bdevs_discovered": 3, 00:24:21.722 "num_base_bdevs_operational": 3, 00:24:21.722 "base_bdevs_list": [ 00:24:21.722 { 00:24:21.722 "name": null, 00:24:21.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.722 "is_configured": false, 
00:24:21.722 "data_offset": 2048, 00:24:21.722 "data_size": 63488 00:24:21.722 }, 00:24:21.722 { 00:24:21.722 "name": "BaseBdev2", 00:24:21.722 "uuid": "4db7057a-813d-5152-9bf4-7d25c0296463", 00:24:21.722 "is_configured": true, 00:24:21.722 "data_offset": 2048, 00:24:21.722 "data_size": 63488 00:24:21.722 }, 00:24:21.722 { 00:24:21.722 "name": "BaseBdev3", 00:24:21.722 "uuid": "f3666e1d-40f6-5021-bd11-b6d0f0a927fa", 00:24:21.722 "is_configured": true, 00:24:21.722 "data_offset": 2048, 00:24:21.722 "data_size": 63488 00:24:21.722 }, 00:24:21.722 { 00:24:21.722 "name": "BaseBdev4", 00:24:21.722 "uuid": "55f68438-e9b2-5085-89d4-f33c73c7b168", 00:24:21.722 "is_configured": true, 00:24:21.722 "data_offset": 2048, 00:24:21.722 "data_size": 63488 00:24:21.722 } 00:24:21.722 ] 00:24:21.722 }' 00:24:21.722 21:20:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:21.722 21:20:44 -- common/autotest_common.sh@10 -- # set +x 00:24:22.288 21:20:44 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:22.288 21:20:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:22.288 21:20:44 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:22.288 21:20:44 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:22.288 21:20:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:22.288 21:20:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.288 21:20:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.546 21:20:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:22.546 "name": "raid_bdev1", 00:24:22.546 "uuid": "65c348ed-b6a4-473a-a5f7-2f392461931b", 00:24:22.546 "strip_size_kb": 64, 00:24:22.546 "state": "online", 00:24:22.546 "raid_level": "raid5f", 00:24:22.546 "superblock": true, 00:24:22.546 "num_base_bdevs": 4, 00:24:22.546 "num_base_bdevs_discovered": 3, 00:24:22.546 "num_base_bdevs_operational": 3, 00:24:22.546 "base_bdevs_list": [ 00:24:22.546 { 00:24:22.546 "name": null, 00:24:22.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.546 "is_configured": false, 00:24:22.546 "data_offset": 2048, 00:24:22.546 "data_size": 63488 00:24:22.546 }, 00:24:22.546 { 00:24:22.546 "name": "BaseBdev2", 00:24:22.546 "uuid": "4db7057a-813d-5152-9bf4-7d25c0296463", 00:24:22.546 "is_configured": true, 00:24:22.546 "data_offset": 2048, 00:24:22.546 "data_size": 63488 00:24:22.546 }, 00:24:22.546 { 00:24:22.546 "name": "BaseBdev3", 00:24:22.546 "uuid": "f3666e1d-40f6-5021-bd11-b6d0f0a927fa", 00:24:22.546 "is_configured": true, 00:24:22.546 "data_offset": 2048, 00:24:22.546 "data_size": 63488 00:24:22.546 }, 00:24:22.546 { 00:24:22.546 "name": "BaseBdev4", 00:24:22.546 "uuid": "55f68438-e9b2-5085-89d4-f33c73c7b168", 00:24:22.546 "is_configured": true, 00:24:22.546 "data_offset": 2048, 00:24:22.546 "data_size": 63488 00:24:22.547 } 00:24:22.547 ] 00:24:22.547 }' 00:24:22.547 21:20:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:22.547 21:20:45 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:22.547 21:20:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:22.805 21:20:45 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:22.805 21:20:45 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:23.063 [2024-06-07 21:20:45.488638] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: 
attach_base_device: spare 00:24:23.063 [2024-06-07 21:20:45.488707] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:23.063 [2024-06-07 21:20:45.493090] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ca00 00:24:23.063 [2024-06-07 21:20:45.495582] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:23.063 21:20:45 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:23.998 21:20:46 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:23.998 21:20:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:23.998 21:20:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:23.998 21:20:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:23.998 21:20:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:23.998 21:20:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.998 21:20:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.256 21:20:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:24.256 "name": "raid_bdev1", 00:24:24.256 "uuid": "65c348ed-b6a4-473a-a5f7-2f392461931b", 00:24:24.256 "strip_size_kb": 64, 00:24:24.256 "state": "online", 00:24:24.256 "raid_level": "raid5f", 00:24:24.256 "superblock": true, 00:24:24.256 "num_base_bdevs": 4, 00:24:24.256 "num_base_bdevs_discovered": 4, 00:24:24.256 "num_base_bdevs_operational": 4, 00:24:24.256 "process": { 00:24:24.256 "type": "rebuild", 00:24:24.256 "target": "spare", 00:24:24.256 "progress": { 00:24:24.256 "blocks": 21120, 00:24:24.256 "percent": 11 00:24:24.256 } 00:24:24.257 }, 00:24:24.257 "base_bdevs_list": [ 00:24:24.257 { 00:24:24.257 "name": "spare", 00:24:24.257 "uuid": "4d78b1ec-ff8d-5ea2-9150-c78c23f59b71", 00:24:24.257 "is_configured": true, 00:24:24.257 "data_offset": 2048, 00:24:24.257 "data_size": 63488 00:24:24.257 }, 00:24:24.257 { 00:24:24.257 "name": "BaseBdev2", 00:24:24.257 "uuid": "4db7057a-813d-5152-9bf4-7d25c0296463", 00:24:24.257 "is_configured": true, 00:24:24.257 "data_offset": 2048, 00:24:24.257 "data_size": 63488 00:24:24.257 }, 00:24:24.257 { 00:24:24.257 "name": "BaseBdev3", 00:24:24.257 "uuid": "f3666e1d-40f6-5021-bd11-b6d0f0a927fa", 00:24:24.257 "is_configured": true, 00:24:24.257 "data_offset": 2048, 00:24:24.257 "data_size": 63488 00:24:24.257 }, 00:24:24.257 { 00:24:24.257 "name": "BaseBdev4", 00:24:24.257 "uuid": "55f68438-e9b2-5085-89d4-f33c73c7b168", 00:24:24.257 "is_configured": true, 00:24:24.257 "data_offset": 2048, 00:24:24.257 "data_size": 63488 00:24:24.257 } 00:24:24.257 ] 00:24:24.257 }' 00:24:24.257 21:20:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:24.257 21:20:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:24.257 21:20:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:24.257 21:20:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:24.257 21:20:46 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:24:24.257 21:20:46 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:24:24.257 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:24:24.257 21:20:46 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:24:24.257 21:20:46 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:24:24.257 21:20:46 -- bdev/bdev_raid.sh@657 -- # local timeout=706 00:24:24.257 
21:20:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:24.257 21:20:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:24.257 21:20:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:24.257 21:20:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:24.257 21:20:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:24.257 21:20:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:24.257 21:20:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.257 21:20:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.515 21:20:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:24.515 "name": "raid_bdev1", 00:24:24.515 "uuid": "65c348ed-b6a4-473a-a5f7-2f392461931b", 00:24:24.515 "strip_size_kb": 64, 00:24:24.515 "state": "online", 00:24:24.515 "raid_level": "raid5f", 00:24:24.515 "superblock": true, 00:24:24.515 "num_base_bdevs": 4, 00:24:24.515 "num_base_bdevs_discovered": 4, 00:24:24.515 "num_base_bdevs_operational": 4, 00:24:24.515 "process": { 00:24:24.515 "type": "rebuild", 00:24:24.515 "target": "spare", 00:24:24.515 "progress": { 00:24:24.515 "blocks": 30720, 00:24:24.515 "percent": 16 00:24:24.515 } 00:24:24.515 }, 00:24:24.515 "base_bdevs_list": [ 00:24:24.515 { 00:24:24.515 "name": "spare", 00:24:24.515 "uuid": "4d78b1ec-ff8d-5ea2-9150-c78c23f59b71", 00:24:24.515 "is_configured": true, 00:24:24.515 "data_offset": 2048, 00:24:24.515 "data_size": 63488 00:24:24.515 }, 00:24:24.515 { 00:24:24.516 "name": "BaseBdev2", 00:24:24.516 "uuid": "4db7057a-813d-5152-9bf4-7d25c0296463", 00:24:24.516 "is_configured": true, 00:24:24.516 "data_offset": 2048, 00:24:24.516 "data_size": 63488 00:24:24.516 }, 00:24:24.516 { 00:24:24.516 "name": "BaseBdev3", 00:24:24.516 "uuid": "f3666e1d-40f6-5021-bd11-b6d0f0a927fa", 00:24:24.516 "is_configured": true, 00:24:24.516 "data_offset": 2048, 00:24:24.516 "data_size": 63488 00:24:24.516 }, 00:24:24.516 { 00:24:24.516 "name": "BaseBdev4", 00:24:24.516 "uuid": "55f68438-e9b2-5085-89d4-f33c73c7b168", 00:24:24.516 "is_configured": true, 00:24:24.516 "data_offset": 2048, 00:24:24.516 "data_size": 63488 00:24:24.516 } 00:24:24.516 ] 00:24:24.516 }' 00:24:24.516 21:20:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:24.774 21:20:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:24.774 21:20:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:24.774 21:20:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:24.774 21:20:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:25.710 21:20:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:25.710 21:20:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:25.710 21:20:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:25.710 21:20:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:25.710 21:20:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:25.710 21:20:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:25.710 21:20:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.710 21:20:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.969 21:20:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:25.969 "name": "raid_bdev1", 
00:24:25.969 "uuid": "65c348ed-b6a4-473a-a5f7-2f392461931b", 00:24:25.969 "strip_size_kb": 64, 00:24:25.969 "state": "online", 00:24:25.969 "raid_level": "raid5f", 00:24:25.969 "superblock": true, 00:24:25.969 "num_base_bdevs": 4, 00:24:25.969 "num_base_bdevs_discovered": 4, 00:24:25.969 "num_base_bdevs_operational": 4, 00:24:25.969 "process": { 00:24:25.969 "type": "rebuild", 00:24:25.969 "target": "spare", 00:24:25.969 "progress": { 00:24:25.969 "blocks": 57600, 00:24:25.969 "percent": 30 00:24:25.969 } 00:24:25.969 }, 00:24:25.969 "base_bdevs_list": [ 00:24:25.969 { 00:24:25.969 "name": "spare", 00:24:25.969 "uuid": "4d78b1ec-ff8d-5ea2-9150-c78c23f59b71", 00:24:25.969 "is_configured": true, 00:24:25.969 "data_offset": 2048, 00:24:25.969 "data_size": 63488 00:24:25.969 }, 00:24:25.969 { 00:24:25.969 "name": "BaseBdev2", 00:24:25.969 "uuid": "4db7057a-813d-5152-9bf4-7d25c0296463", 00:24:25.969 "is_configured": true, 00:24:25.969 "data_offset": 2048, 00:24:25.969 "data_size": 63488 00:24:25.969 }, 00:24:25.969 { 00:24:25.969 "name": "BaseBdev3", 00:24:25.969 "uuid": "f3666e1d-40f6-5021-bd11-b6d0f0a927fa", 00:24:25.969 "is_configured": true, 00:24:25.969 "data_offset": 2048, 00:24:25.969 "data_size": 63488 00:24:25.969 }, 00:24:25.969 { 00:24:25.969 "name": "BaseBdev4", 00:24:25.969 "uuid": "55f68438-e9b2-5085-89d4-f33c73c7b168", 00:24:25.969 "is_configured": true, 00:24:25.969 "data_offset": 2048, 00:24:25.969 "data_size": 63488 00:24:25.969 } 00:24:25.969 ] 00:24:25.969 }' 00:24:25.969 21:20:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:25.969 21:20:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:25.969 21:20:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:26.228 21:20:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:26.228 21:20:48 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:27.164 21:20:49 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:27.164 21:20:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:27.164 21:20:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:27.164 21:20:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:27.164 21:20:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:27.164 21:20:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:27.164 21:20:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.164 21:20:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.424 21:20:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:27.424 "name": "raid_bdev1", 00:24:27.424 "uuid": "65c348ed-b6a4-473a-a5f7-2f392461931b", 00:24:27.424 "strip_size_kb": 64, 00:24:27.424 "state": "online", 00:24:27.424 "raid_level": "raid5f", 00:24:27.424 "superblock": true, 00:24:27.424 "num_base_bdevs": 4, 00:24:27.424 "num_base_bdevs_discovered": 4, 00:24:27.424 "num_base_bdevs_operational": 4, 00:24:27.424 "process": { 00:24:27.424 "type": "rebuild", 00:24:27.424 "target": "spare", 00:24:27.424 "progress": { 00:24:27.424 "blocks": 82560, 00:24:27.424 "percent": 43 00:24:27.424 } 00:24:27.424 }, 00:24:27.424 "base_bdevs_list": [ 00:24:27.424 { 00:24:27.424 "name": "spare", 00:24:27.424 "uuid": "4d78b1ec-ff8d-5ea2-9150-c78c23f59b71", 00:24:27.424 "is_configured": true, 00:24:27.424 "data_offset": 2048, 00:24:27.424 "data_size": 63488 00:24:27.424 }, 00:24:27.424 { 00:24:27.424 "name": 
"BaseBdev2", 00:24:27.424 "uuid": "4db7057a-813d-5152-9bf4-7d25c0296463", 00:24:27.424 "is_configured": true, 00:24:27.424 "data_offset": 2048, 00:24:27.424 "data_size": 63488 00:24:27.424 }, 00:24:27.424 { 00:24:27.424 "name": "BaseBdev3", 00:24:27.424 "uuid": "f3666e1d-40f6-5021-bd11-b6d0f0a927fa", 00:24:27.424 "is_configured": true, 00:24:27.424 "data_offset": 2048, 00:24:27.424 "data_size": 63488 00:24:27.424 }, 00:24:27.424 { 00:24:27.424 "name": "BaseBdev4", 00:24:27.424 "uuid": "55f68438-e9b2-5085-89d4-f33c73c7b168", 00:24:27.424 "is_configured": true, 00:24:27.424 "data_offset": 2048, 00:24:27.424 "data_size": 63488 00:24:27.424 } 00:24:27.424 ] 00:24:27.424 }' 00:24:27.424 21:20:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:27.424 21:20:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:27.424 21:20:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:27.424 21:20:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:27.424 21:20:50 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:28.800 21:20:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:28.800 21:20:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:28.800 21:20:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:28.800 21:20:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:28.800 21:20:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:28.800 21:20:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:28.800 21:20:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.800 21:20:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:28.800 21:20:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:28.800 "name": "raid_bdev1", 00:24:28.800 "uuid": "65c348ed-b6a4-473a-a5f7-2f392461931b", 00:24:28.800 "strip_size_kb": 64, 00:24:28.800 "state": "online", 00:24:28.800 "raid_level": "raid5f", 00:24:28.800 "superblock": true, 00:24:28.800 "num_base_bdevs": 4, 00:24:28.800 "num_base_bdevs_discovered": 4, 00:24:28.800 "num_base_bdevs_operational": 4, 00:24:28.800 "process": { 00:24:28.800 "type": "rebuild", 00:24:28.800 "target": "spare", 00:24:28.800 "progress": { 00:24:28.800 "blocks": 109440, 00:24:28.800 "percent": 57 00:24:28.800 } 00:24:28.800 }, 00:24:28.800 "base_bdevs_list": [ 00:24:28.800 { 00:24:28.800 "name": "spare", 00:24:28.800 "uuid": "4d78b1ec-ff8d-5ea2-9150-c78c23f59b71", 00:24:28.800 "is_configured": true, 00:24:28.800 "data_offset": 2048, 00:24:28.800 "data_size": 63488 00:24:28.800 }, 00:24:28.800 { 00:24:28.800 "name": "BaseBdev2", 00:24:28.800 "uuid": "4db7057a-813d-5152-9bf4-7d25c0296463", 00:24:28.800 "is_configured": true, 00:24:28.800 "data_offset": 2048, 00:24:28.800 "data_size": 63488 00:24:28.800 }, 00:24:28.800 { 00:24:28.800 "name": "BaseBdev3", 00:24:28.800 "uuid": "f3666e1d-40f6-5021-bd11-b6d0f0a927fa", 00:24:28.800 "is_configured": true, 00:24:28.800 "data_offset": 2048, 00:24:28.800 "data_size": 63488 00:24:28.800 }, 00:24:28.800 { 00:24:28.800 "name": "BaseBdev4", 00:24:28.800 "uuid": "55f68438-e9b2-5085-89d4-f33c73c7b168", 00:24:28.800 "is_configured": true, 00:24:28.800 "data_offset": 2048, 00:24:28.800 "data_size": 63488 00:24:28.800 } 00:24:28.800 ] 00:24:28.800 }' 00:24:28.800 21:20:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:28.800 21:20:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:24:28.800 21:20:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:28.800 21:20:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:28.800 21:20:51 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:29.785 21:20:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:29.785 21:20:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:29.785 21:20:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:29.785 21:20:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:29.785 21:20:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:29.785 21:20:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:29.785 21:20:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.785 21:20:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.044 21:20:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:30.044 "name": "raid_bdev1", 00:24:30.044 "uuid": "65c348ed-b6a4-473a-a5f7-2f392461931b", 00:24:30.044 "strip_size_kb": 64, 00:24:30.044 "state": "online", 00:24:30.044 "raid_level": "raid5f", 00:24:30.044 "superblock": true, 00:24:30.044 "num_base_bdevs": 4, 00:24:30.044 "num_base_bdevs_discovered": 4, 00:24:30.044 "num_base_bdevs_operational": 4, 00:24:30.044 "process": { 00:24:30.044 "type": "rebuild", 00:24:30.044 "target": "spare", 00:24:30.044 "progress": { 00:24:30.045 "blocks": 134400, 00:24:30.045 "percent": 70 00:24:30.045 } 00:24:30.045 }, 00:24:30.045 "base_bdevs_list": [ 00:24:30.045 { 00:24:30.045 "name": "spare", 00:24:30.045 "uuid": "4d78b1ec-ff8d-5ea2-9150-c78c23f59b71", 00:24:30.045 "is_configured": true, 00:24:30.045 "data_offset": 2048, 00:24:30.045 "data_size": 63488 00:24:30.045 }, 00:24:30.045 { 00:24:30.045 "name": "BaseBdev2", 00:24:30.045 "uuid": "4db7057a-813d-5152-9bf4-7d25c0296463", 00:24:30.045 "is_configured": true, 00:24:30.045 "data_offset": 2048, 00:24:30.045 "data_size": 63488 00:24:30.045 }, 00:24:30.045 { 00:24:30.045 "name": "BaseBdev3", 00:24:30.045 "uuid": "f3666e1d-40f6-5021-bd11-b6d0f0a927fa", 00:24:30.045 "is_configured": true, 00:24:30.045 "data_offset": 2048, 00:24:30.045 "data_size": 63488 00:24:30.045 }, 00:24:30.045 { 00:24:30.045 "name": "BaseBdev4", 00:24:30.045 "uuid": "55f68438-e9b2-5085-89d4-f33c73c7b168", 00:24:30.045 "is_configured": true, 00:24:30.045 "data_offset": 2048, 00:24:30.045 "data_size": 63488 00:24:30.045 } 00:24:30.045 ] 00:24:30.045 }' 00:24:30.045 21:20:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:30.045 21:20:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:30.045 21:20:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:30.304 21:20:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:30.304 21:20:52 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:31.240 21:20:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:31.240 21:20:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:31.240 21:20:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:31.240 21:20:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:31.240 21:20:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:31.241 21:20:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:31.241 21:20:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.241 21:20:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.499 21:20:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:31.499 "name": "raid_bdev1", 00:24:31.499 "uuid": "65c348ed-b6a4-473a-a5f7-2f392461931b", 00:24:31.499 "strip_size_kb": 64, 00:24:31.499 "state": "online", 00:24:31.499 "raid_level": "raid5f", 00:24:31.499 "superblock": true, 00:24:31.499 "num_base_bdevs": 4, 00:24:31.499 "num_base_bdevs_discovered": 4, 00:24:31.499 "num_base_bdevs_operational": 4, 00:24:31.499 "process": { 00:24:31.499 "type": "rebuild", 00:24:31.499 "target": "spare", 00:24:31.499 "progress": { 00:24:31.499 "blocks": 161280, 00:24:31.499 "percent": 84 00:24:31.499 } 00:24:31.499 }, 00:24:31.499 "base_bdevs_list": [ 00:24:31.499 { 00:24:31.499 "name": "spare", 00:24:31.499 "uuid": "4d78b1ec-ff8d-5ea2-9150-c78c23f59b71", 00:24:31.499 "is_configured": true, 00:24:31.499 "data_offset": 2048, 00:24:31.499 "data_size": 63488 00:24:31.499 }, 00:24:31.499 { 00:24:31.499 "name": "BaseBdev2", 00:24:31.499 "uuid": "4db7057a-813d-5152-9bf4-7d25c0296463", 00:24:31.499 "is_configured": true, 00:24:31.499 "data_offset": 2048, 00:24:31.499 "data_size": 63488 00:24:31.499 }, 00:24:31.499 { 00:24:31.499 "name": "BaseBdev3", 00:24:31.499 "uuid": "f3666e1d-40f6-5021-bd11-b6d0f0a927fa", 00:24:31.499 "is_configured": true, 00:24:31.499 "data_offset": 2048, 00:24:31.499 "data_size": 63488 00:24:31.499 }, 00:24:31.499 { 00:24:31.499 "name": "BaseBdev4", 00:24:31.499 "uuid": "55f68438-e9b2-5085-89d4-f33c73c7b168", 00:24:31.499 "is_configured": true, 00:24:31.499 "data_offset": 2048, 00:24:31.499 "data_size": 63488 00:24:31.499 } 00:24:31.499 ] 00:24:31.499 }' 00:24:31.499 21:20:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:31.499 21:20:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:31.499 21:20:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:31.499 21:20:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:31.499 21:20:54 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:32.874 21:20:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:32.874 21:20:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:32.874 21:20:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:32.874 21:20:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:32.874 21:20:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:32.874 21:20:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:32.874 21:20:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.874 21:20:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.874 21:20:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:32.874 "name": "raid_bdev1", 00:24:32.874 "uuid": "65c348ed-b6a4-473a-a5f7-2f392461931b", 00:24:32.874 "strip_size_kb": 64, 00:24:32.874 "state": "online", 00:24:32.874 "raid_level": "raid5f", 00:24:32.874 "superblock": true, 00:24:32.874 "num_base_bdevs": 4, 00:24:32.874 "num_base_bdevs_discovered": 4, 00:24:32.874 "num_base_bdevs_operational": 4, 00:24:32.874 "process": { 00:24:32.874 "type": "rebuild", 00:24:32.874 "target": "spare", 00:24:32.874 "progress": { 00:24:32.874 "blocks": 186240, 00:24:32.874 "percent": 97 00:24:32.874 } 00:24:32.874 }, 00:24:32.874 "base_bdevs_list": [ 00:24:32.874 { 
00:24:32.874 "name": "spare", 00:24:32.874 "uuid": "4d78b1ec-ff8d-5ea2-9150-c78c23f59b71", 00:24:32.874 "is_configured": true, 00:24:32.874 "data_offset": 2048, 00:24:32.874 "data_size": 63488 00:24:32.874 }, 00:24:32.874 { 00:24:32.874 "name": "BaseBdev2", 00:24:32.875 "uuid": "4db7057a-813d-5152-9bf4-7d25c0296463", 00:24:32.875 "is_configured": true, 00:24:32.875 "data_offset": 2048, 00:24:32.875 "data_size": 63488 00:24:32.875 }, 00:24:32.875 { 00:24:32.875 "name": "BaseBdev3", 00:24:32.875 "uuid": "f3666e1d-40f6-5021-bd11-b6d0f0a927fa", 00:24:32.875 "is_configured": true, 00:24:32.875 "data_offset": 2048, 00:24:32.875 "data_size": 63488 00:24:32.875 }, 00:24:32.875 { 00:24:32.875 "name": "BaseBdev4", 00:24:32.875 "uuid": "55f68438-e9b2-5085-89d4-f33c73c7b168", 00:24:32.875 "is_configured": true, 00:24:32.875 "data_offset": 2048, 00:24:32.875 "data_size": 63488 00:24:32.875 } 00:24:32.875 ] 00:24:32.875 }' 00:24:32.875 21:20:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:32.875 21:20:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:32.875 21:20:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:32.875 21:20:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:32.875 21:20:55 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:33.133 [2024-06-07 21:20:55.576763] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:33.133 [2024-06-07 21:20:55.576846] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:33.133 [2024-06-07 21:20:55.577043] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:34.068 21:20:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:34.068 21:20:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:34.068 21:20:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:34.068 21:20:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:34.068 21:20:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:34.068 21:20:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:34.068 21:20:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.068 21:20:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.326 21:20:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:34.326 "name": "raid_bdev1", 00:24:34.326 "uuid": "65c348ed-b6a4-473a-a5f7-2f392461931b", 00:24:34.326 "strip_size_kb": 64, 00:24:34.326 "state": "online", 00:24:34.326 "raid_level": "raid5f", 00:24:34.326 "superblock": true, 00:24:34.326 "num_base_bdevs": 4, 00:24:34.326 "num_base_bdevs_discovered": 4, 00:24:34.326 "num_base_bdevs_operational": 4, 00:24:34.326 "base_bdevs_list": [ 00:24:34.326 { 00:24:34.326 "name": "spare", 00:24:34.326 "uuid": "4d78b1ec-ff8d-5ea2-9150-c78c23f59b71", 00:24:34.326 "is_configured": true, 00:24:34.326 "data_offset": 2048, 00:24:34.326 "data_size": 63488 00:24:34.326 }, 00:24:34.326 { 00:24:34.326 "name": "BaseBdev2", 00:24:34.326 "uuid": "4db7057a-813d-5152-9bf4-7d25c0296463", 00:24:34.326 "is_configured": true, 00:24:34.326 "data_offset": 2048, 00:24:34.326 "data_size": 63488 00:24:34.326 }, 00:24:34.326 { 00:24:34.326 "name": "BaseBdev3", 00:24:34.326 "uuid": "f3666e1d-40f6-5021-bd11-b6d0f0a927fa", 00:24:34.326 "is_configured": true, 00:24:34.326 "data_offset": 2048, 00:24:34.326 "data_size": 63488 
00:24:34.326 }, 00:24:34.326 { 00:24:34.326 "name": "BaseBdev4", 00:24:34.326 "uuid": "55f68438-e9b2-5085-89d4-f33c73c7b168", 00:24:34.326 "is_configured": true, 00:24:34.326 "data_offset": 2048, 00:24:34.326 "data_size": 63488 00:24:34.326 } 00:24:34.326 ] 00:24:34.327 }' 00:24:34.327 21:20:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:34.327 21:20:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:34.327 21:20:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:34.327 21:20:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:34.327 21:20:56 -- bdev/bdev_raid.sh@660 -- # break 00:24:34.327 21:20:56 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:34.327 21:20:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:34.327 21:20:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:34.327 21:20:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:34.327 21:20:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:34.327 21:20:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.327 21:20:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.585 21:20:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:34.585 "name": "raid_bdev1", 00:24:34.585 "uuid": "65c348ed-b6a4-473a-a5f7-2f392461931b", 00:24:34.585 "strip_size_kb": 64, 00:24:34.585 "state": "online", 00:24:34.585 "raid_level": "raid5f", 00:24:34.585 "superblock": true, 00:24:34.585 "num_base_bdevs": 4, 00:24:34.585 "num_base_bdevs_discovered": 4, 00:24:34.585 "num_base_bdevs_operational": 4, 00:24:34.585 "base_bdevs_list": [ 00:24:34.585 { 00:24:34.585 "name": "spare", 00:24:34.585 "uuid": "4d78b1ec-ff8d-5ea2-9150-c78c23f59b71", 00:24:34.585 "is_configured": true, 00:24:34.585 "data_offset": 2048, 00:24:34.585 "data_size": 63488 00:24:34.585 }, 00:24:34.585 { 00:24:34.585 "name": "BaseBdev2", 00:24:34.585 "uuid": "4db7057a-813d-5152-9bf4-7d25c0296463", 00:24:34.585 "is_configured": true, 00:24:34.585 "data_offset": 2048, 00:24:34.585 "data_size": 63488 00:24:34.585 }, 00:24:34.585 { 00:24:34.585 "name": "BaseBdev3", 00:24:34.585 "uuid": "f3666e1d-40f6-5021-bd11-b6d0f0a927fa", 00:24:34.585 "is_configured": true, 00:24:34.585 "data_offset": 2048, 00:24:34.585 "data_size": 63488 00:24:34.585 }, 00:24:34.585 { 00:24:34.585 "name": "BaseBdev4", 00:24:34.585 "uuid": "55f68438-e9b2-5085-89d4-f33c73c7b168", 00:24:34.585 "is_configured": true, 00:24:34.585 "data_offset": 2048, 00:24:34.585 "data_size": 63488 00:24:34.585 } 00:24:34.585 ] 00:24:34.585 }' 00:24:34.585 21:20:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:34.585 21:20:57 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:34.585 21:20:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:34.585 21:20:57 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:34.585 21:20:57 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:34.585 21:20:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:34.585 21:20:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:34.585 21:20:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:34.585 21:20:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:34.585 21:20:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:34.585 21:20:57 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:34.585 21:20:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:34.585 21:20:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:34.585 21:20:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:34.585 21:20:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.585 21:20:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.844 21:20:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:34.844 "name": "raid_bdev1", 00:24:34.844 "uuid": "65c348ed-b6a4-473a-a5f7-2f392461931b", 00:24:34.844 "strip_size_kb": 64, 00:24:34.844 "state": "online", 00:24:34.844 "raid_level": "raid5f", 00:24:34.844 "superblock": true, 00:24:34.844 "num_base_bdevs": 4, 00:24:34.844 "num_base_bdevs_discovered": 4, 00:24:34.844 "num_base_bdevs_operational": 4, 00:24:34.844 "base_bdevs_list": [ 00:24:34.844 { 00:24:34.844 "name": "spare", 00:24:34.844 "uuid": "4d78b1ec-ff8d-5ea2-9150-c78c23f59b71", 00:24:34.844 "is_configured": true, 00:24:34.844 "data_offset": 2048, 00:24:34.844 "data_size": 63488 00:24:34.844 }, 00:24:34.844 { 00:24:34.844 "name": "BaseBdev2", 00:24:34.844 "uuid": "4db7057a-813d-5152-9bf4-7d25c0296463", 00:24:34.844 "is_configured": true, 00:24:34.844 "data_offset": 2048, 00:24:34.844 "data_size": 63488 00:24:34.844 }, 00:24:34.844 { 00:24:34.844 "name": "BaseBdev3", 00:24:34.844 "uuid": "f3666e1d-40f6-5021-bd11-b6d0f0a927fa", 00:24:34.844 "is_configured": true, 00:24:34.844 "data_offset": 2048, 00:24:34.844 "data_size": 63488 00:24:34.844 }, 00:24:34.844 { 00:24:34.844 "name": "BaseBdev4", 00:24:34.844 "uuid": "55f68438-e9b2-5085-89d4-f33c73c7b168", 00:24:34.844 "is_configured": true, 00:24:34.844 "data_offset": 2048, 00:24:34.844 "data_size": 63488 00:24:34.844 } 00:24:34.844 ] 00:24:34.844 }' 00:24:34.844 21:20:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:34.844 21:20:57 -- common/autotest_common.sh@10 -- # set +x 00:24:35.411 21:20:58 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:35.669 [2024-06-07 21:20:58.323927] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:35.669 [2024-06-07 21:20:58.323964] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:35.669 [2024-06-07 21:20:58.324100] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:35.669 [2024-06-07 21:20:58.324221] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:35.669 [2024-06-07 21:20:58.324243] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:24:35.669 21:20:58 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.669 21:20:58 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:35.928 21:20:58 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:35.928 21:20:58 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:35.928 21:20:58 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:35.928 21:20:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:35.928 21:20:58 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:35.928 21:20:58 -- bdev/nbd_common.sh@10 -- 
# local bdev_list 00:24:35.928 21:20:58 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:35.928 21:20:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:35.928 21:20:58 -- bdev/nbd_common.sh@12 -- # local i 00:24:35.928 21:20:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:35.928 21:20:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:35.928 21:20:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:36.187 /dev/nbd0 00:24:36.187 21:20:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:36.187 21:20:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:36.187 21:20:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:36.187 21:20:58 -- common/autotest_common.sh@857 -- # local i 00:24:36.187 21:20:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:36.187 21:20:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:36.187 21:20:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:36.187 21:20:58 -- common/autotest_common.sh@861 -- # break 00:24:36.187 21:20:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:36.187 21:20:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:36.187 21:20:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:36.187 1+0 records in 00:24:36.187 1+0 records out 00:24:36.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003494 s, 11.7 MB/s 00:24:36.187 21:20:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:36.187 21:20:58 -- common/autotest_common.sh@874 -- # size=4096 00:24:36.187 21:20:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:36.187 21:20:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:36.187 21:20:58 -- common/autotest_common.sh@877 -- # return 0 00:24:36.187 21:20:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:36.187 21:20:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:36.187 21:20:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:36.454 /dev/nbd1 00:24:36.454 21:20:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:36.454 21:20:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:36.454 21:20:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:24:36.454 21:20:59 -- common/autotest_common.sh@857 -- # local i 00:24:36.454 21:20:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:36.454 21:20:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:36.454 21:20:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:24:36.454 21:20:59 -- common/autotest_common.sh@861 -- # break 00:24:36.454 21:20:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:36.454 21:20:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:36.454 21:20:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:36.454 1+0 records in 00:24:36.454 1+0 records out 00:24:36.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00059993 s, 6.8 MB/s 00:24:36.454 21:20:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:36.454 21:20:59 -- common/autotest_common.sh@874 -- # size=4096 00:24:36.454 21:20:59 -- 
common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:36.454 21:20:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:36.454 21:20:59 -- common/autotest_common.sh@877 -- # return 0 00:24:36.454 21:20:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:36.454 21:20:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:36.454 21:20:59 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:36.727 21:20:59 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:36.727 21:20:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:36.727 21:20:59 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:36.727 21:20:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:36.727 21:20:59 -- bdev/nbd_common.sh@51 -- # local i 00:24:36.727 21:20:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:36.727 21:20:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:36.727 21:20:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:36.986 21:20:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:36.986 21:20:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:36.986 21:20:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:36.986 21:20:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:36.986 21:20:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:36.986 21:20:59 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:36.986 21:20:59 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:36.986 21:20:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:36.986 21:20:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:36.986 21:20:59 -- bdev/nbd_common.sh@41 -- # break 00:24:36.986 21:20:59 -- bdev/nbd_common.sh@45 -- # return 0 00:24:36.986 21:20:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:36.986 21:20:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:37.245 21:20:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:37.245 21:20:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:37.245 21:20:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:37.245 21:20:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:37.245 21:20:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:37.245 21:20:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:37.245 21:20:59 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:37.245 21:20:59 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:37.245 21:20:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:37.245 21:20:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:37.245 21:20:59 -- bdev/nbd_common.sh@41 -- # break 00:24:37.245 21:20:59 -- bdev/nbd_common.sh@45 -- # return 0 00:24:37.245 21:20:59 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:24:37.245 21:20:59 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:37.245 21:20:59 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:24:37.245 21:20:59 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:37.504 21:21:00 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:37.762 [2024-06-07 21:21:00.329279] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:37.762 [2024-06-07 21:21:00.329391] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.763 [2024-06-07 21:21:00.329433] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:24:37.763 [2024-06-07 21:21:00.329455] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.763 [2024-06-07 21:21:00.331629] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.763 [2024-06-07 21:21:00.331705] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:37.763 [2024-06-07 21:21:00.331841] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:37.763 [2024-06-07 21:21:00.331894] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:37.763 BaseBdev1 00:24:37.763 21:21:00 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:37.763 21:21:00 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:24:37.763 21:21:00 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:24:38.021 21:21:00 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:38.281 [2024-06-07 21:21:00.773478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:38.281 [2024-06-07 21:21:00.773588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.281 [2024-06-07 21:21:00.773635] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:24:38.281 [2024-06-07 21:21:00.773657] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.281 [2024-06-07 21:21:00.774124] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.281 [2024-06-07 21:21:00.774200] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:38.281 [2024-06-07 21:21:00.774322] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:24:38.281 [2024-06-07 21:21:00.774338] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:24:38.281 [2024-06-07 21:21:00.774346] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:38.281 [2024-06-07 21:21:00.774381] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:24:38.281 [2024-06-07 21:21:00.774438] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:38.281 BaseBdev2 00:24:38.281 21:21:00 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:38.281 21:21:00 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:24:38.281 21:21:00 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:24:38.540 21:21:00 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:38.540 [2024-06-07 21:21:01.181571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev3_malloc 00:24:38.540 [2024-06-07 21:21:01.181678] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.540 [2024-06-07 21:21:01.181716] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:24:38.540 [2024-06-07 21:21:01.181743] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.540 [2024-06-07 21:21:01.182270] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.540 [2024-06-07 21:21:01.182362] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:38.540 [2024-06-07 21:21:01.182457] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:24:38.540 [2024-06-07 21:21:01.182485] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:38.540 BaseBdev3 00:24:38.540 21:21:01 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:38.540 21:21:01 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:24:38.540 21:21:01 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:24:38.799 21:21:01 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:39.057 [2024-06-07 21:21:01.613678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:39.057 [2024-06-07 21:21:01.613792] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:39.057 [2024-06-07 21:21:01.613830] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:24:39.057 [2024-06-07 21:21:01.613858] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:39.057 [2024-06-07 21:21:01.614356] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:39.057 [2024-06-07 21:21:01.614417] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:39.057 [2024-06-07 21:21:01.614547] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:24:39.057 [2024-06-07 21:21:01.614577] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:39.057 BaseBdev4 00:24:39.057 21:21:01 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:39.316 21:21:01 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:39.575 [2024-06-07 21:21:02.037803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:39.575 [2024-06-07 21:21:02.037905] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:39.575 [2024-06-07 21:21:02.037945] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:24:39.575 [2024-06-07 21:21:02.037973] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:39.575 [2024-06-07 21:21:02.038607] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:39.575 [2024-06-07 21:21:02.038686] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:39.575 [2024-06-07 21:21:02.038817] 
bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:39.575 [2024-06-07 21:21:02.038884] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:39.575 spare 00:24:39.575 21:21:02 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:39.575 21:21:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:39.575 21:21:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:39.575 21:21:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:39.575 21:21:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:39.575 21:21:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:39.575 21:21:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:39.575 21:21:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:39.575 21:21:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:39.575 21:21:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:39.575 21:21:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.575 21:21:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.575 [2024-06-07 21:21:02.139018] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:24:39.575 [2024-06-07 21:21:02.139045] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:39.575 [2024-06-07 21:21:02.139248] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004d7b0 00:24:39.575 [2024-06-07 21:21:02.140230] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:24:39.575 [2024-06-07 21:21:02.140270] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:24:39.575 [2024-06-07 21:21:02.140471] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:39.833 21:21:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:39.833 "name": "raid_bdev1", 00:24:39.833 "uuid": "65c348ed-b6a4-473a-a5f7-2f392461931b", 00:24:39.833 "strip_size_kb": 64, 00:24:39.833 "state": "online", 00:24:39.833 "raid_level": "raid5f", 00:24:39.833 "superblock": true, 00:24:39.833 "num_base_bdevs": 4, 00:24:39.833 "num_base_bdevs_discovered": 4, 00:24:39.833 "num_base_bdevs_operational": 4, 00:24:39.833 "base_bdevs_list": [ 00:24:39.833 { 00:24:39.833 "name": "spare", 00:24:39.833 "uuid": "4d78b1ec-ff8d-5ea2-9150-c78c23f59b71", 00:24:39.833 "is_configured": true, 00:24:39.833 "data_offset": 2048, 00:24:39.833 "data_size": 63488 00:24:39.833 }, 00:24:39.833 { 00:24:39.833 "name": "BaseBdev2", 00:24:39.833 "uuid": "4db7057a-813d-5152-9bf4-7d25c0296463", 00:24:39.833 "is_configured": true, 00:24:39.833 "data_offset": 2048, 00:24:39.833 "data_size": 63488 00:24:39.833 }, 00:24:39.833 { 00:24:39.833 "name": "BaseBdev3", 00:24:39.833 "uuid": "f3666e1d-40f6-5021-bd11-b6d0f0a927fa", 00:24:39.833 "is_configured": true, 00:24:39.833 "data_offset": 2048, 00:24:39.833 "data_size": 63488 00:24:39.833 }, 00:24:39.833 { 00:24:39.833 "name": "BaseBdev4", 00:24:39.833 "uuid": "55f68438-e9b2-5085-89d4-f33c73c7b168", 00:24:39.833 "is_configured": true, 00:24:39.833 "data_offset": 2048, 00:24:39.833 "data_size": 63488 00:24:39.833 } 00:24:39.833 ] 00:24:39.833 }' 00:24:39.834 21:21:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:39.834 21:21:02 -- 
common/autotest_common.sh@10 -- # set +x 00:24:40.400 21:21:02 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:40.400 21:21:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:40.400 21:21:02 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:40.400 21:21:02 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:40.400 21:21:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:40.400 21:21:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.400 21:21:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.658 21:21:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:40.658 "name": "raid_bdev1", 00:24:40.658 "uuid": "65c348ed-b6a4-473a-a5f7-2f392461931b", 00:24:40.658 "strip_size_kb": 64, 00:24:40.658 "state": "online", 00:24:40.658 "raid_level": "raid5f", 00:24:40.658 "superblock": true, 00:24:40.658 "num_base_bdevs": 4, 00:24:40.658 "num_base_bdevs_discovered": 4, 00:24:40.658 "num_base_bdevs_operational": 4, 00:24:40.658 "base_bdevs_list": [ 00:24:40.658 { 00:24:40.658 "name": "spare", 00:24:40.658 "uuid": "4d78b1ec-ff8d-5ea2-9150-c78c23f59b71", 00:24:40.658 "is_configured": true, 00:24:40.658 "data_offset": 2048, 00:24:40.658 "data_size": 63488 00:24:40.658 }, 00:24:40.658 { 00:24:40.658 "name": "BaseBdev2", 00:24:40.658 "uuid": "4db7057a-813d-5152-9bf4-7d25c0296463", 00:24:40.658 "is_configured": true, 00:24:40.658 "data_offset": 2048, 00:24:40.658 "data_size": 63488 00:24:40.658 }, 00:24:40.658 { 00:24:40.658 "name": "BaseBdev3", 00:24:40.658 "uuid": "f3666e1d-40f6-5021-bd11-b6d0f0a927fa", 00:24:40.658 "is_configured": true, 00:24:40.658 "data_offset": 2048, 00:24:40.658 "data_size": 63488 00:24:40.658 }, 00:24:40.658 { 00:24:40.658 "name": "BaseBdev4", 00:24:40.658 "uuid": "55f68438-e9b2-5085-89d4-f33c73c7b168", 00:24:40.658 "is_configured": true, 00:24:40.658 "data_offset": 2048, 00:24:40.658 "data_size": 63488 00:24:40.658 } 00:24:40.658 ] 00:24:40.658 }' 00:24:40.658 21:21:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:40.658 21:21:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:40.658 21:21:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:40.658 21:21:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:40.658 21:21:03 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.658 21:21:03 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:40.916 21:21:03 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:24:40.916 21:21:03 -- bdev/bdev_raid.sh@709 -- # killprocess 145675 00:24:40.916 21:21:03 -- common/autotest_common.sh@926 -- # '[' -z 145675 ']' 00:24:40.916 21:21:03 -- common/autotest_common.sh@930 -- # kill -0 145675 00:24:40.916 21:21:03 -- common/autotest_common.sh@931 -- # uname 00:24:40.916 21:21:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:41.173 21:21:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145675 00:24:41.173 killing process with pid 145675 00:24:41.173 Received shutdown signal, test time was about 60.000000 seconds 00:24:41.173 00:24:41.173 Latency(us) 00:24:41.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.173 =================================================================================================================== 
00:24:41.173 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:41.173 21:21:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:41.173 21:21:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:41.173 21:21:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145675' 00:24:41.173 21:21:03 -- common/autotest_common.sh@945 -- # kill 145675 00:24:41.173 21:21:03 -- common/autotest_common.sh@950 -- # wait 145675 00:24:41.173 [2024-06-07 21:21:03.607653] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:41.173 [2024-06-07 21:21:03.607847] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:41.173 [2024-06-07 21:21:03.607985] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:41.173 [2024-06-07 21:21:03.608006] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:24:41.173 [2024-06-07 21:21:03.655852] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:41.431 ************************************ 00:24:41.431 END TEST raid5f_rebuild_test_sb 00:24:41.431 ************************************ 00:24:41.431 21:21:03 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:41.431 00:24:41.431 real 0m29.130s 00:24:41.431 user 0m45.465s 00:24:41.431 sys 0m3.290s 00:24:41.431 21:21:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:41.431 21:21:03 -- common/autotest_common.sh@10 -- # set +x 00:24:41.431 21:21:03 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:24:41.431 00:24:41.431 real 11m33.726s 00:24:41.431 user 19m54.538s 00:24:41.431 sys 1m27.266s 00:24:41.431 ************************************ 00:24:41.431 END TEST bdev_raid 00:24:41.431 ************************************ 00:24:41.431 21:21:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:41.431 21:21:03 -- common/autotest_common.sh@10 -- # set +x 00:24:41.431 21:21:03 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:24:41.431 21:21:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:41.431 21:21:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:41.431 21:21:03 -- common/autotest_common.sh@10 -- # set +x 00:24:41.431 ************************************ 00:24:41.431 START TEST bdevperf_config 00:24:41.431 ************************************ 00:24:41.431 21:21:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:24:41.431 * Looking for test storage... 
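The raid5f rebuild test that ends above drives a simple polling loop: once per second it fetches the raid bdev state over the RPC socket, extracts the raid_bdev1 entry with jq, and keeps waiting while the process fields still report an in-flight rebuild onto the spare. A minimal standalone sketch of that pattern, assuming the rpc.py path and socket seen in the trace (the 60-second timeout value is an assumption; the script's actual limit is not visible here):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    timeout=60
    while (( SECONDS < timeout )); do
        # Pull the full bdev list and keep only the raid bdev under test.
        info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        ptype=$(jq -r '.process.type // "none"' <<< "$info")
        target=$(jq -r '.process.target // "none"' <<< "$info")
        # While the rebuild runs, process.type is "rebuild" and the target is
        # the spare; both fall back to "none" once the process has completed.
        [[ $ptype == rebuild && $target == spare ]] || break
        sleep 1
    done

Once the loop breaks, the test re-reads the same state (as in the trace above) and asserts that the array is back online with all four base bdevs configured and no process entry remaining.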
00:24:41.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:24:41.431 21:21:04 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:24:41.431 21:21:04 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:24:41.431 21:21:04 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:24:41.431 21:21:04 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:41.431 21:21:04 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:41.431 21:21:04 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:24:41.431 21:21:04 -- bdevperf/common.sh@8 -- # local job_section=global 00:24:41.432 21:21:04 -- bdevperf/common.sh@9 -- # local rw=read 00:24:41.432 21:21:04 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:24:41.432 21:21:04 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:24:41.432 21:21:04 -- bdevperf/common.sh@13 -- # cat 00:24:41.432 21:21:04 -- bdevperf/common.sh@18 -- # job='[global]' 00:24:41.432 00:24:41.432 21:21:04 -- bdevperf/common.sh@19 -- # echo 00:24:41.432 21:21:04 -- bdevperf/common.sh@20 -- # cat 00:24:41.432 21:21:04 -- bdevperf/test_config.sh@18 -- # create_job job0 00:24:41.432 21:21:04 -- bdevperf/common.sh@8 -- # local job_section=job0 00:24:41.432 21:21:04 -- bdevperf/common.sh@9 -- # local rw= 00:24:41.432 21:21:04 -- bdevperf/common.sh@10 -- # local filename= 00:24:41.432 21:21:04 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:24:41.432 00:24:41.432 21:21:04 -- bdevperf/common.sh@18 -- # job='[job0]' 00:24:41.432 21:21:04 -- bdevperf/common.sh@19 -- # echo 00:24:41.432 21:21:04 -- bdevperf/common.sh@20 -- # cat 00:24:41.432 21:21:04 -- bdevperf/test_config.sh@19 -- # create_job job1 00:24:41.432 21:21:04 -- bdevperf/common.sh@8 -- # local job_section=job1 00:24:41.432 21:21:04 -- bdevperf/common.sh@9 -- # local rw= 00:24:41.432 21:21:04 -- bdevperf/common.sh@10 -- # local filename= 00:24:41.432 00:24:41.432 21:21:04 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:24:41.432 21:21:04 -- bdevperf/common.sh@18 -- # job='[job1]' 00:24:41.432 21:21:04 -- bdevperf/common.sh@19 -- # echo 00:24:41.432 21:21:04 -- bdevperf/common.sh@20 -- # cat 00:24:41.432 21:21:04 -- bdevperf/test_config.sh@20 -- # create_job job2 00:24:41.432 21:21:04 -- bdevperf/common.sh@8 -- # local job_section=job2 00:24:41.432 21:21:04 -- bdevperf/common.sh@9 -- # local rw= 00:24:41.432 21:21:04 -- bdevperf/common.sh@10 -- # local filename= 00:24:41.432 00:24:41.432 21:21:04 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:24:41.432 21:21:04 -- bdevperf/common.sh@18 -- # job='[job2]' 00:24:41.432 21:21:04 -- bdevperf/common.sh@19 -- # echo 00:24:41.432 21:21:04 -- bdevperf/common.sh@20 -- # cat 00:24:41.690 21:21:04 -- bdevperf/test_config.sh@21 -- # create_job job3 00:24:41.690 21:21:04 -- bdevperf/common.sh@8 -- # local job_section=job3 00:24:41.690 21:21:04 -- bdevperf/common.sh@9 -- # local rw= 00:24:41.690 21:21:04 -- bdevperf/common.sh@10 -- # local filename= 00:24:41.690 00:24:41.690 21:21:04 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:24:41.690 21:21:04 -- bdevperf/common.sh@18 -- # job='[job3]' 00:24:41.690 21:21:04 -- bdevperf/common.sh@19 -- # echo 00:24:41.690 21:21:04 -- bdevperf/common.sh@20 -- # cat 00:24:41.690 21:21:04 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:44.989 21:21:06 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-06-07 21:21:04.159894] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:44.989 [2024-06-07 21:21:04.160112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146486 ] 00:24:44.989 Using job config with 4 jobs 00:24:44.989 [2024-06-07 21:21:04.333739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.989 [2024-06-07 21:21:04.426615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.989 cpumask for '\''job0'\'' is too big 00:24:44.989 cpumask for '\''job1'\'' is too big 00:24:44.989 cpumask for '\''job2'\'' is too big 00:24:44.989 cpumask for '\''job3'\'' is too big 00:24:44.989 Running I/O for 2 seconds... 00:24:44.989 00:24:44.989 Latency(us) 00:24:44.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.989 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:44.989 Malloc0 : 2.02 28693.66 28.02 0.00 0.00 8913.49 1817.13 14239.19 00:24:44.989 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:44.989 Malloc0 : 2.02 28671.80 28.00 0.00 0.00 8901.82 1750.11 12690.15 00:24:44.989 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:44.989 Malloc0 : 2.02 28651.60 27.98 0.00 0.00 8891.34 1742.66 11081.54 00:24:44.989 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:44.989 Malloc0 : 2.02 28630.32 27.96 0.00 0.00 8880.69 1742.66 11141.12 00:24:44.989 =================================================================================================================== 00:24:44.989 Total : 114647.38 111.96 0.00 0.00 8896.84 1742.66 14239.19' 00:24:44.989 21:21:06 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-06-07 21:21:04.159894] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:44.989 [2024-06-07 21:21:04.160112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146486 ] 00:24:44.989 Using job config with 4 jobs 00:24:44.989 [2024-06-07 21:21:04.333739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.989 [2024-06-07 21:21:04.426615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.989 cpumask for '\''job0'\'' is too big 00:24:44.989 cpumask for '\''job1'\'' is too big 00:24:44.989 cpumask for '\''job2'\'' is too big 00:24:44.989 cpumask for '\''job3'\'' is too big 00:24:44.989 Running I/O for 2 seconds... 
00:24:44.989 00:24:44.989 Latency(us) 00:24:44.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.989 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:44.989 Malloc0 : 2.02 28693.66 28.02 0.00 0.00 8913.49 1817.13 14239.19 00:24:44.989 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:44.989 Malloc0 : 2.02 28671.80 28.00 0.00 0.00 8901.82 1750.11 12690.15 00:24:44.989 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:44.989 Malloc0 : 2.02 28651.60 27.98 0.00 0.00 8891.34 1742.66 11081.54 00:24:44.989 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:44.989 Malloc0 : 2.02 28630.32 27.96 0.00 0.00 8880.69 1742.66 11141.12 00:24:44.989 =================================================================================================================== 00:24:44.989 Total : 114647.38 111.96 0.00 0.00 8896.84 1742.66 14239.19' 00:24:44.989 21:21:06 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:24:44.989 21:21:06 -- bdevperf/common.sh@32 -- # echo '[2024-06-07 21:21:04.159894] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:44.989 [2024-06-07 21:21:04.160112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146486 ] 00:24:44.989 Using job config with 4 jobs 00:24:44.989 [2024-06-07 21:21:04.333739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.989 [2024-06-07 21:21:04.426615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.990 cpumask for '\''job0'\'' is too big 00:24:44.990 cpumask for '\''job1'\'' is too big 00:24:44.990 cpumask for '\''job2'\'' is too big 00:24:44.990 cpumask for '\''job3'\'' is too big 00:24:44.990 Running I/O for 2 seconds... 00:24:44.990 00:24:44.990 Latency(us) 00:24:44.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.990 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:44.990 Malloc0 : 2.02 28693.66 28.02 0.00 0.00 8913.49 1817.13 14239.19 00:24:44.990 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:44.990 Malloc0 : 2.02 28671.80 28.00 0.00 0.00 8901.82 1750.11 12690.15 00:24:44.990 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:44.990 Malloc0 : 2.02 28651.60 27.98 0.00 0.00 8891.34 1742.66 11081.54 00:24:44.990 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:44.990 Malloc0 : 2.02 28630.32 27.96 0.00 0.00 8880.69 1742.66 11141.12 00:24:44.990 =================================================================================================================== 00:24:44.990 Total : 114647.38 111.96 0.00 0.00 8896.84 1742.66 14239.19' 00:24:44.990 21:21:06 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:24:44.990 21:21:06 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:24:44.990 21:21:06 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:44.990 [2024-06-07 21:21:06.974832] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
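The create_job helper traced above assembles the fio-style job file (test.conf) that bdevperf consumes through -j, next to the --json bdev configuration. Going only by the arguments visible in the trace (global read Malloc0, then bare job0 through job3 sections), the generated file plausibly looks like the following; whatever shared defaults the cat step appends under [global] are not visible in this trace and are omitted:

    [global]
    rw=read
    filename=Malloc0

    [job0]

    [job1]

    [job2]

    [job3]

bdevperf then confirms the parse with the "Using job config with 4 jobs" line that the test greps back out of the captured output.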
00:24:44.990 [2024-06-07 21:21:06.975090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146536 ] 00:24:44.990 [2024-06-07 21:21:07.143210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.990 [2024-06-07 21:21:07.237676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.990 cpumask for 'job0' is too big 00:24:44.990 cpumask for 'job1' is too big 00:24:44.990 cpumask for 'job2' is too big 00:24:44.990 cpumask for 'job3' is too big 00:24:47.538 21:21:09 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:24:47.538 Running I/O for 2 seconds... 00:24:47.538 00:24:47.538 Latency(us) 00:24:47.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.538 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:47.538 Malloc0 : 2.01 28585.96 27.92 0.00 0.00 8946.94 1846.92 15073.28 00:24:47.538 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:47.538 Malloc0 : 2.02 28560.58 27.89 0.00 0.00 8935.52 1772.45 13285.93 00:24:47.538 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:47.538 Malloc0 : 2.02 28535.50 27.87 0.00 0.00 8925.57 1824.58 11558.17 00:24:47.538 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:47.538 Malloc0 : 2.02 28600.98 27.93 0.00 0.00 8885.87 930.91 11081.54 00:24:47.538 =================================================================================================================== 00:24:47.538 Total : 114283.02 111.60 0.00 0.00 8923.43 930.91 15073.28' 00:24:47.538 21:21:09 -- bdevperf/test_config.sh@27 -- # cleanup 00:24:47.538 21:21:09 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:47.538 21:21:09 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:24:47.538 21:21:09 -- bdevperf/common.sh@8 -- # local job_section=job0 00:24:47.538 21:21:09 -- bdevperf/common.sh@9 -- # local rw=write 00:24:47.538 21:21:09 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:24:47.538 00:24:47.538 21:21:09 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:24:47.538 21:21:09 -- bdevperf/common.sh@18 -- # job='[job0]' 00:24:47.538 21:21:09 -- bdevperf/common.sh@19 -- # echo 00:24:47.538 21:21:09 -- bdevperf/common.sh@20 -- # cat 00:24:47.538 21:21:09 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:24:47.538 21:21:09 -- bdevperf/common.sh@8 -- # local job_section=job1 00:24:47.538 21:21:09 -- bdevperf/common.sh@9 -- # local rw=write 00:24:47.538 21:21:09 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:24:47.538 21:21:09 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:24:47.538 00:24:47.538 21:21:09 -- bdevperf/common.sh@18 -- # job='[job1]' 00:24:47.538 21:21:09 -- bdevperf/common.sh@19 -- # echo 00:24:47.538 21:21:09 -- bdevperf/common.sh@20 -- # cat 00:24:47.538 21:21:09 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:24:47.538 21:21:09 -- bdevperf/common.sh@8 -- # local job_section=job2 00:24:47.538 21:21:09 -- bdevperf/common.sh@9 -- # local rw=write 00:24:47.538 21:21:09 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:24:47.538 21:21:09 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:24:47.538 00:24:47.538 21:21:09 -- 
bdevperf/common.sh@18 -- # job='[job2]' 00:24:47.538 21:21:09 -- bdevperf/common.sh@19 -- # echo 00:24:47.538 21:21:09 -- bdevperf/common.sh@20 -- # cat 00:24:47.538 21:21:09 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:50.073 21:21:12 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-06-07 21:21:09.793179] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:50.073 [2024-06-07 21:21:09.793408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146573 ] 00:24:50.073 Using job config with 3 jobs 00:24:50.073 [2024-06-07 21:21:09.944363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.073 [2024-06-07 21:21:10.038139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.073 cpumask for '\''job0'\'' is too big 00:24:50.073 cpumask for '\''job1'\'' is too big 00:24:50.073 cpumask for '\''job2'\'' is too big 00:24:50.073 Running I/O for 2 seconds... 00:24:50.073 00:24:50.073 Latency(us) 00:24:50.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.073 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:50.073 Malloc0 : 2.01 39134.06 38.22 0.00 0.00 6534.82 1474.56 8817.57 00:24:50.073 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:50.073 Malloc0 : 2.01 39135.40 38.22 0.00 0.00 6523.41 1489.45 7804.74 00:24:50.073 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:50.073 Malloc0 : 2.02 39107.13 38.19 0.00 0.00 6517.31 1392.64 7685.59 00:24:50.073 =================================================================================================================== 00:24:50.073 Total : 117376.59 114.63 0.00 0.00 6525.17 1392.64 8817.57' 00:24:50.073 21:21:12 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-06-07 21:21:09.793179] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:50.073 [2024-06-07 21:21:09.793408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146573 ] 00:24:50.073 Using job config with 3 jobs 00:24:50.073 [2024-06-07 21:21:09.944363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.073 [2024-06-07 21:21:10.038139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.073 cpumask for '\''job0'\'' is too big 00:24:50.073 cpumask for '\''job1'\'' is too big 00:24:50.073 cpumask for '\''job2'\'' is too big 00:24:50.073 Running I/O for 2 seconds... 
00:24:50.073 00:24:50.073 Latency(us) 00:24:50.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.073 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:50.073 Malloc0 : 2.01 39134.06 38.22 0.00 0.00 6534.82 1474.56 8817.57 00:24:50.073 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:50.073 Malloc0 : 2.01 39135.40 38.22 0.00 0.00 6523.41 1489.45 7804.74 00:24:50.073 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:50.073 Malloc0 : 2.02 39107.13 38.19 0.00 0.00 6517.31 1392.64 7685.59 00:24:50.073 =================================================================================================================== 00:24:50.073 Total : 117376.59 114.63 0.00 0.00 6525.17 1392.64 8817.57' 00:24:50.073 21:21:12 -- bdevperf/common.sh@32 -- # echo '[2024-06-07 21:21:09.793179] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:50.073 [2024-06-07 21:21:09.793408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146573 ] 00:24:50.073 Using job config with 3 jobs 00:24:50.073 [2024-06-07 21:21:09.944363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.073 [2024-06-07 21:21:10.038139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.073 cpumask for '\''job0'\'' is too big 00:24:50.073 cpumask for '\''job1'\'' is too big 00:24:50.073 cpumask for '\''job2'\'' is too big 00:24:50.073 Running I/O for 2 seconds... 00:24:50.073 00:24:50.073 Latency(us) 00:24:50.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.073 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:50.073 Malloc0 : 2.01 39134.06 38.22 0.00 0.00 6534.82 1474.56 8817.57 00:24:50.073 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:50.073 Malloc0 : 2.01 39135.40 38.22 0.00 0.00 6523.41 1489.45 7804.74 00:24:50.073 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:50.073 Malloc0 : 2.02 39107.13 38.19 0.00 0.00 6517.31 1392.64 7685.59 00:24:50.073 =================================================================================================================== 00:24:50.073 Total : 117376.59 114.63 0.00 0.00 6525.17 1392.64 8817.57' 00:24:50.073 21:21:12 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:24:50.073 21:21:12 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:24:50.073 21:21:12 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:24:50.073 21:21:12 -- bdevperf/test_config.sh@35 -- # cleanup 00:24:50.073 21:21:12 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:50.073 21:21:12 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:24:50.073 21:21:12 -- bdevperf/common.sh@8 -- # local job_section=global 00:24:50.073 21:21:12 -- bdevperf/common.sh@9 -- # local rw=rw 00:24:50.073 21:21:12 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:24:50.073 21:21:12 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:24:50.073 21:21:12 -- bdevperf/common.sh@13 -- # cat 00:24:50.073 21:21:12 -- bdevperf/common.sh@18 -- # job='[global]' 00:24:50.073 21:21:12 -- bdevperf/common.sh@19 -- # echo 00:24:50.073 00:24:50.073 
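Each bdevperf run above is validated the same way: the test captures the tool's stdout and pulls the advertised job count back out of it with a two-stage grep, as in the [[ 3 == \3 ]] check just traced. A compact sketch of that extraction, reconstructed from the grep pipeline in the trace; the function wrapper is added here for illustration:

    get_num_jobs() {
        # First isolate the summary line bdevperf prints, then keep the digits.
        echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
    }
    [[ $(get_num_jobs "$bdevperf_output") == 3 ]]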
21:21:12 -- bdevperf/common.sh@20 -- # cat 00:24:50.073 21:21:12 -- bdevperf/test_config.sh@38 -- # create_job job0 00:24:50.073 21:21:12 -- bdevperf/common.sh@8 -- # local job_section=job0 00:24:50.073 21:21:12 -- bdevperf/common.sh@9 -- # local rw= 00:24:50.073 21:21:12 -- bdevperf/common.sh@10 -- # local filename= 00:24:50.073 21:21:12 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:24:50.073 21:21:12 -- bdevperf/common.sh@18 -- # job='[job0]' 00:24:50.073 00:24:50.073 21:21:12 -- bdevperf/common.sh@19 -- # echo 00:24:50.073 21:21:12 -- bdevperf/common.sh@20 -- # cat 00:24:50.073 21:21:12 -- bdevperf/test_config.sh@39 -- # create_job job1 00:24:50.073 21:21:12 -- bdevperf/common.sh@8 -- # local job_section=job1 00:24:50.073 21:21:12 -- bdevperf/common.sh@9 -- # local rw= 00:24:50.073 00:24:50.073 21:21:12 -- bdevperf/common.sh@10 -- # local filename= 00:24:50.073 21:21:12 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:24:50.073 21:21:12 -- bdevperf/common.sh@18 -- # job='[job1]' 00:24:50.073 21:21:12 -- bdevperf/common.sh@19 -- # echo 00:24:50.073 21:21:12 -- bdevperf/common.sh@20 -- # cat 00:24:50.073 21:21:12 -- bdevperf/test_config.sh@40 -- # create_job job2 00:24:50.073 21:21:12 -- bdevperf/common.sh@8 -- # local job_section=job2 00:24:50.073 21:21:12 -- bdevperf/common.sh@9 -- # local rw= 00:24:50.073 00:24:50.073 21:21:12 -- bdevperf/common.sh@10 -- # local filename= 00:24:50.073 21:21:12 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:24:50.073 21:21:12 -- bdevperf/common.sh@18 -- # job='[job2]' 00:24:50.073 21:21:12 -- bdevperf/common.sh@19 -- # echo 00:24:50.073 21:21:12 -- bdevperf/common.sh@20 -- # cat 00:24:50.073 21:21:12 -- bdevperf/test_config.sh@41 -- # create_job job3 00:24:50.073 21:21:12 -- bdevperf/common.sh@8 -- # local job_section=job3 00:24:50.073 21:21:12 -- bdevperf/common.sh@9 -- # local rw= 00:24:50.073 21:21:12 -- bdevperf/common.sh@10 -- # local filename= 00:24:50.073 21:21:12 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:24:50.073 21:21:12 -- bdevperf/common.sh@18 -- # job='[job3]' 00:24:50.073 00:24:50.073 21:21:12 -- bdevperf/common.sh@19 -- # echo 00:24:50.073 21:21:12 -- bdevperf/common.sh@20 -- # cat 00:24:50.073 21:21:12 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:53.367 21:21:15 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-06-07 21:21:12.573873] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:53.367 [2024-06-07 21:21:12.574095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146619 ] 00:24:53.368 Using job config with 4 jobs 00:24:53.368 [2024-06-07 21:21:12.741434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.368 [2024-06-07 21:21:12.838423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.368 cpumask for '\''job0'\'' is too big 00:24:53.368 cpumask for '\''job1'\'' is too big 00:24:53.368 cpumask for '\''job2'\'' is too big 00:24:53.368 cpumask for '\''job3'\'' is too big 00:24:53.368 Running I/O for 2 seconds... 
00:24:53.368 00:24:53.368 Latency(us) 00:24:53.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.368 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc0 : 2.03 14623.42 14.28 0.00 0.00 17490.21 4200.26 31695.59 00:24:53.368 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc1 : 2.03 14612.49 14.27 0.00 0.00 17485.41 5093.93 31457.28 00:24:53.368 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc0 : 2.03 14601.91 14.26 0.00 0.00 17433.55 3589.59 28120.90 00:24:53.368 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc1 : 2.04 14591.25 14.25 0.00 0.00 17429.57 3991.74 28120.90 00:24:53.368 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc0 : 2.04 14580.81 14.24 0.00 0.00 17387.54 3693.85 24307.90 00:24:53.368 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc1 : 2.04 14570.44 14.23 0.00 0.00 17384.83 4557.73 24188.74 00:24:53.368 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc0 : 2.04 14653.85 14.31 0.00 0.00 17223.30 2919.33 20733.21 00:24:53.368 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc1 : 2.05 14643.25 14.30 0.00 0.00 17219.06 2263.97 20852.36 00:24:53.368 =================================================================================================================== 00:24:53.368 Total : 116877.43 114.14 0.00 0.00 17381.34 2263.97 31695.59' 00:24:53.368 21:21:15 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-06-07 21:21:12.573873] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:53.368 [2024-06-07 21:21:12.574095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146619 ] 00:24:53.368 Using job config with 4 jobs 00:24:53.368 [2024-06-07 21:21:12.741434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.368 [2024-06-07 21:21:12.838423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.368 cpumask for '\''job0'\'' is too big 00:24:53.368 cpumask for '\''job1'\'' is too big 00:24:53.368 cpumask for '\''job2'\'' is too big 00:24:53.368 cpumask for '\''job3'\'' is too big 00:24:53.368 Running I/O for 2 seconds... 
00:24:53.368 00:24:53.368 Latency(us) 00:24:53.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.368 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc0 : 2.03 14623.42 14.28 0.00 0.00 17490.21 4200.26 31695.59 00:24:53.368 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc1 : 2.03 14612.49 14.27 0.00 0.00 17485.41 5093.93 31457.28 00:24:53.368 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc0 : 2.03 14601.91 14.26 0.00 0.00 17433.55 3589.59 28120.90 00:24:53.368 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc1 : 2.04 14591.25 14.25 0.00 0.00 17429.57 3991.74 28120.90 00:24:53.368 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc0 : 2.04 14580.81 14.24 0.00 0.00 17387.54 3693.85 24307.90 00:24:53.368 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc1 : 2.04 14570.44 14.23 0.00 0.00 17384.83 4557.73 24188.74 00:24:53.368 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc0 : 2.04 14653.85 14.31 0.00 0.00 17223.30 2919.33 20733.21 00:24:53.368 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc1 : 2.05 14643.25 14.30 0.00 0.00 17219.06 2263.97 20852.36 00:24:53.368 =================================================================================================================== 00:24:53.368 Total : 116877.43 114.14 0.00 0.00 17381.34 2263.97 31695.59' 00:24:53.368 21:21:15 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:24:53.368 21:21:15 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:24:53.368 21:21:15 -- bdevperf/common.sh@32 -- # echo '[2024-06-07 21:21:12.573873] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:53.368 [2024-06-07 21:21:12.574095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146619 ] 00:24:53.368 Using job config with 4 jobs 00:24:53.368 [2024-06-07 21:21:12.741434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.368 [2024-06-07 21:21:12.838423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.368 cpumask for '\''job0'\'' is too big 00:24:53.368 cpumask for '\''job1'\'' is too big 00:24:53.368 cpumask for '\''job2'\'' is too big 00:24:53.368 cpumask for '\''job3'\'' is too big 00:24:53.368 Running I/O for 2 seconds... 
00:24:53.368 00:24:53.368 Latency(us) 00:24:53.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.368 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc0 : 2.03 14623.42 14.28 0.00 0.00 17490.21 4200.26 31695.59 00:24:53.368 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc1 : 2.03 14612.49 14.27 0.00 0.00 17485.41 5093.93 31457.28 00:24:53.368 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc0 : 2.03 14601.91 14.26 0.00 0.00 17433.55 3589.59 28120.90 00:24:53.368 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc1 : 2.04 14591.25 14.25 0.00 0.00 17429.57 3991.74 28120.90 00:24:53.368 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc0 : 2.04 14580.81 14.24 0.00 0.00 17387.54 3693.85 24307.90 00:24:53.368 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc1 : 2.04 14570.44 14.23 0.00 0.00 17384.83 4557.73 24188.74 00:24:53.368 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc0 : 2.04 14653.85 14.31 0.00 0.00 17223.30 2919.33 20733.21 00:24:53.368 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:53.368 Malloc1 : 2.05 14643.25 14.30 0.00 0.00 17219.06 2263.97 20852.36 00:24:53.368 =================================================================================================================== 00:24:53.368 Total : 116877.43 114.14 0.00 0.00 17381.34 2263.97 31695.59' 00:24:53.368 21:21:15 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:24:53.368 21:21:15 -- bdevperf/test_config.sh@44 -- # cleanup 00:24:53.368 21:21:15 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:53.368 21:21:15 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:53.368 00:24:53.368 real 0m11.340s 00:24:53.368 user 0m9.738s 00:24:53.368 sys 0m1.052s 00:24:53.368 21:21:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:53.368 21:21:15 -- common/autotest_common.sh@10 -- # set +x 00:24:53.368 ************************************ 00:24:53.368 END TEST bdevperf_config 00:24:53.368 ************************************ 00:24:53.368 21:21:15 -- spdk/autotest.sh@198 -- # uname -s 00:24:53.368 21:21:15 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:24:53.368 21:21:15 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:24:53.368 21:21:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:53.368 21:21:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:53.368 21:21:15 -- common/autotest_common.sh@10 -- # set +x 00:24:53.368 ************************************ 00:24:53.368 START TEST reactor_set_interrupt 00:24:53.368 ************************************ 00:24:53.368 21:21:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:24:53.368 * Looking for test storage... 
00:24:53.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:53.368 21:21:15 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:24:53.368 21:21:15 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:24:53.368 21:21:15 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:53.368 21:21:15 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:53.369 21:21:15 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:24:53.369 21:21:15 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:53.369 21:21:15 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:24:53.369 21:21:15 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:24:53.369 21:21:15 -- common/autotest_common.sh@34 -- # set -e 00:24:53.369 21:21:15 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:24:53.369 21:21:15 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:24:53.369 21:21:15 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:24:53.369 21:21:15 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:24:53.369 21:21:15 -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:53.369 21:21:15 -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:24:53.369 21:21:15 -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:24:53.369 21:21:15 -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:24:53.369 21:21:15 -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:24:53.369 21:21:15 -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:24:53.369 21:21:15 -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:24:53.369 21:21:15 -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:24:53.369 21:21:15 -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:24:53.369 21:21:15 -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:24:53.369 21:21:15 -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:24:53.369 21:21:15 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:24:53.369 21:21:15 -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:24:53.369 21:21:15 -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:24:53.369 21:21:15 -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:24:53.369 21:21:15 -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:24:53.369 21:21:15 -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:24:53.369 21:21:15 -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:24:53.369 21:21:15 -- common/build_config.sh@19 -- # CONFIG_CET=n 00:24:53.369 21:21:15 -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:24:53.369 21:21:15 -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:24:53.369 21:21:15 -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:24:53.369 21:21:15 -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:24:53.369 21:21:15 -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:24:53.369 21:21:15 -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:24:53.369 21:21:15 -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:24:53.369 21:21:15 -- common/build_config.sh@27 -- # 
CONFIG_FUSE=n 00:24:53.369 21:21:15 -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:24:53.369 21:21:15 -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:24:53.369 21:21:15 -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:24:53.369 21:21:15 -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:24:53.369 21:21:15 -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES= 00:24:53.369 21:21:15 -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:24:53.369 21:21:15 -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:24:53.369 21:21:15 -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:24:53.369 21:21:15 -- common/build_config.sh@36 -- # CONFIG_IPSEC_MB=n 00:24:53.369 21:21:15 -- common/build_config.sh@37 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:53.369 21:21:15 -- common/build_config.sh@38 -- # CONFIG_ASAN=y 00:24:53.369 21:21:15 -- common/build_config.sh@39 -- # CONFIG_SHARED=n 00:24:53.369 21:21:15 -- common/build_config.sh@40 -- # CONFIG_VTUNE_DIR= 00:24:53.369 21:21:15 -- common/build_config.sh@41 -- # CONFIG_RDMA_SET_TOS=y 00:24:53.369 21:21:15 -- common/build_config.sh@42 -- # CONFIG_VBDEV_COMPRESS=n 00:24:53.369 21:21:15 -- common/build_config.sh@43 -- # CONFIG_VFIO_USER_DIR= 00:24:53.369 21:21:15 -- common/build_config.sh@44 -- # CONFIG_FUZZER_LIB= 00:24:53.369 21:21:15 -- common/build_config.sh@45 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:53.369 21:21:15 -- common/build_config.sh@46 -- # CONFIG_USDT=n 00:24:53.369 21:21:15 -- common/build_config.sh@47 -- # CONFIG_URING_ZNS=n 00:24:53.369 21:21:15 -- common/build_config.sh@48 -- # CONFIG_FC_PATH= 00:24:53.369 21:21:15 -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:24:53.369 21:21:15 -- common/build_config.sh@50 -- # CONFIG_CUSTOMOCF=n 00:24:53.369 21:21:15 -- common/build_config.sh@51 -- # CONFIG_DPDK_PKG_CONFIG=n 00:24:53.369 21:21:15 -- common/build_config.sh@52 -- # CONFIG_WERROR=y 00:24:53.369 21:21:15 -- common/build_config.sh@53 -- # CONFIG_DEBUG=y 00:24:53.369 21:21:15 -- common/build_config.sh@54 -- # CONFIG_RDMA=y 00:24:53.369 21:21:15 -- common/build_config.sh@55 -- # CONFIG_HAVE_ARC4RANDOM=n 00:24:53.369 21:21:15 -- common/build_config.sh@56 -- # CONFIG_FUZZER=n 00:24:53.369 21:21:15 -- common/build_config.sh@57 -- # CONFIG_FC=n 00:24:53.369 21:21:15 -- common/build_config.sh@58 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:24:53.369 21:21:15 -- common/build_config.sh@59 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:53.369 21:21:15 -- common/build_config.sh@60 -- # CONFIG_DPDK_COMPRESSDEV=n 00:24:53.369 21:21:15 -- common/build_config.sh@61 -- # CONFIG_CROSS_PREFIX= 00:24:53.369 21:21:15 -- common/build_config.sh@62 -- # CONFIG_PREFIX=/usr/local 00:24:53.369 21:21:15 -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBBSD=n 00:24:53.369 21:21:15 -- common/build_config.sh@64 -- # CONFIG_UBSAN=y 00:24:53.369 21:21:15 -- common/build_config.sh@65 -- # CONFIG_PGO_CAPTURE=n 00:24:53.369 21:21:15 -- common/build_config.sh@66 -- # CONFIG_UBLK=n 00:24:53.369 21:21:15 -- common/build_config.sh@67 -- # CONFIG_ISAL_CRYPTO=y 00:24:53.369 21:21:15 -- common/build_config.sh@68 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:24:53.369 21:21:15 -- common/build_config.sh@69 -- # CONFIG_CRYPTO=n 00:24:53.369 21:21:15 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:24:53.369 21:21:15 -- common/build_config.sh@71 -- # CONFIG_LIBDIR= 00:24:53.369 21:21:15 -- common/build_config.sh@72 -- # CONFIG_IPSEC_MB_DIR= 00:24:53.369 21:21:15 -- common/build_config.sh@73 -- # CONFIG_PGO_USE=n 00:24:53.369 21:21:15 -- 
common/build_config.sh@74 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:53.369 21:21:15 -- common/build_config.sh@75 -- # CONFIG_GOLANG=n 00:24:53.369 21:21:15 -- common/build_config.sh@76 -- # CONFIG_VHOST=y 00:24:53.369 21:21:15 -- common/build_config.sh@77 -- # CONFIG_IDXD=y 00:24:53.369 21:21:15 -- common/build_config.sh@78 -- # CONFIG_AVAHI=n 00:24:53.369 21:21:15 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:24:53.369 21:21:15 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:24:53.369 21:21:15 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:24:53.369 21:21:15 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:24:53.369 21:21:15 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:24:53.369 21:21:15 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:24:53.369 21:21:15 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:24:53.369 21:21:15 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:24:53.369 21:21:15 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:24:53.369 21:21:15 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:24:53.369 21:21:15 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:24:53.369 21:21:15 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:24:53.369 21:21:15 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:24:53.369 21:21:15 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:24:53.369 21:21:15 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:24:53.369 21:21:15 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:24:53.369 21:21:15 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:24:53.369 #define SPDK_CONFIG_H 00:24:53.369 #define SPDK_CONFIG_APPS 1 00:24:53.369 #define SPDK_CONFIG_ARCH native 00:24:53.369 #define SPDK_CONFIG_ASAN 1 00:24:53.369 #undef SPDK_CONFIG_AVAHI 00:24:53.369 #undef SPDK_CONFIG_CET 00:24:53.369 #define SPDK_CONFIG_COVERAGE 1 00:24:53.369 #define SPDK_CONFIG_CROSS_PREFIX 00:24:53.369 #undef SPDK_CONFIG_CRYPTO 00:24:53.369 #undef SPDK_CONFIG_CRYPTO_MLX5 00:24:53.369 #undef SPDK_CONFIG_CUSTOMOCF 00:24:53.369 #undef SPDK_CONFIG_DAOS 00:24:53.369 #define SPDK_CONFIG_DAOS_DIR 00:24:53.369 #define SPDK_CONFIG_DEBUG 1 00:24:53.369 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:24:53.369 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:24:53.369 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:24:53.369 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:24:53.369 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:24:53.369 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:53.369 #define SPDK_CONFIG_EXAMPLES 1 00:24:53.369 #undef SPDK_CONFIG_FC 00:24:53.369 #define SPDK_CONFIG_FC_PATH 00:24:53.369 #define SPDK_CONFIG_FIO_PLUGIN 1 00:24:53.369 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:24:53.369 #undef SPDK_CONFIG_FUSE 00:24:53.369 #undef SPDK_CONFIG_FUZZER 00:24:53.369 #define SPDK_CONFIG_FUZZER_LIB 00:24:53.369 #undef SPDK_CONFIG_GOLANG 00:24:53.369 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:24:53.369 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:24:53.369 
#undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:24:53.369 #undef SPDK_CONFIG_HAVE_LIBBSD 00:24:53.369 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:24:53.369 #define SPDK_CONFIG_IDXD 1 00:24:53.369 #undef SPDK_CONFIG_IDXD_KERNEL 00:24:53.369 #undef SPDK_CONFIG_IPSEC_MB 00:24:53.369 #define SPDK_CONFIG_IPSEC_MB_DIR 00:24:53.369 #define SPDK_CONFIG_ISAL 1 00:24:53.369 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:24:53.369 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:24:53.369 #define SPDK_CONFIG_LIBDIR 00:24:53.369 #undef SPDK_CONFIG_LTO 00:24:53.369 #define SPDK_CONFIG_MAX_LCORES 00:24:53.369 #define SPDK_CONFIG_NVME_CUSE 1 00:24:53.369 #undef SPDK_CONFIG_OCF 00:24:53.369 #define SPDK_CONFIG_OCF_PATH 00:24:53.369 #define SPDK_CONFIG_OPENSSL_PATH 00:24:53.369 #undef SPDK_CONFIG_PGO_CAPTURE 00:24:53.369 #undef SPDK_CONFIG_PGO_USE 00:24:53.369 #define SPDK_CONFIG_PREFIX /usr/local 00:24:53.369 #define SPDK_CONFIG_RAID5F 1 00:24:53.369 #undef SPDK_CONFIG_RBD 00:24:53.369 #define SPDK_CONFIG_RDMA 1 00:24:53.369 #define SPDK_CONFIG_RDMA_PROV verbs 00:24:53.369 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:24:53.369 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:24:53.369 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:24:53.370 #undef SPDK_CONFIG_SHARED 00:24:53.370 #undef SPDK_CONFIG_SMA 00:24:53.370 #define SPDK_CONFIG_TESTS 1 00:24:53.370 #undef SPDK_CONFIG_TSAN 00:24:53.370 #undef SPDK_CONFIG_UBLK 00:24:53.370 #define SPDK_CONFIG_UBSAN 1 00:24:53.370 #define SPDK_CONFIG_UNIT_TESTS 1 00:24:53.370 #undef SPDK_CONFIG_URING 00:24:53.370 #define SPDK_CONFIG_URING_PATH 00:24:53.370 #undef SPDK_CONFIG_URING_ZNS 00:24:53.370 #undef SPDK_CONFIG_USDT 00:24:53.370 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:24:53.370 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:24:53.370 #undef SPDK_CONFIG_VFIO_USER 00:24:53.370 #define SPDK_CONFIG_VFIO_USER_DIR 00:24:53.370 #define SPDK_CONFIG_VHOST 1 00:24:53.370 #define SPDK_CONFIG_VIRTIO 1 00:24:53.370 #undef SPDK_CONFIG_VTUNE 00:24:53.370 #define SPDK_CONFIG_VTUNE_DIR 00:24:53.370 #define SPDK_CONFIG_WERROR 1 00:24:53.370 #define SPDK_CONFIG_WPDK_DIR 00:24:53.370 #undef SPDK_CONFIG_XNVME 00:24:53.370 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:24:53.370 21:21:15 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:24:53.370 21:21:15 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:53.370 21:21:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.370 21:21:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.370 21:21:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.370 21:21:15 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:53.370 21:21:15 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:53.370 21:21:15 -- paths/export.sh@4 -- 
# PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:53.370 21:21:15 -- paths/export.sh@5 -- # export PATH 00:24:53.370 21:21:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:53.370 21:21:15 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:24:53.370 21:21:15 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:24:53.370 21:21:15 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:24:53.370 21:21:15 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:24:53.370 21:21:15 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:24:53.370 21:21:15 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:24:53.370 21:21:15 -- pm/common@16 -- # TEST_TAG=N/A 00:24:53.370 21:21:15 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:24:53.370 21:21:15 -- common/autotest_common.sh@52 -- # : 1 00:24:53.370 21:21:15 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:24:53.370 21:21:15 -- common/autotest_common.sh@56 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:24:53.370 21:21:15 -- common/autotest_common.sh@58 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:24:53.370 21:21:15 -- common/autotest_common.sh@60 -- # : 1 00:24:53.370 21:21:15 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:24:53.370 21:21:15 -- common/autotest_common.sh@62 -- # : 1 00:24:53.370 21:21:15 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:24:53.370 21:21:15 -- common/autotest_common.sh@64 -- # : 00:24:53.370 21:21:15 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:24:53.370 21:21:15 -- common/autotest_common.sh@66 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:24:53.370 21:21:15 -- common/autotest_common.sh@68 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:24:53.370 21:21:15 -- common/autotest_common.sh@70 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:24:53.370 21:21:15 -- common/autotest_common.sh@72 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:24:53.370 21:21:15 -- common/autotest_common.sh@74 -- # : 1 00:24:53.370 21:21:15 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:24:53.370 21:21:15 -- common/autotest_common.sh@76 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:24:53.370 21:21:15 -- common/autotest_common.sh@78 -- # : 0 00:24:53.370 21:21:15 -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:24:53.370 21:21:15 -- common/autotest_common.sh@80 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:24:53.370 21:21:15 -- common/autotest_common.sh@82 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:24:53.370 21:21:15 -- common/autotest_common.sh@84 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:24:53.370 21:21:15 -- common/autotest_common.sh@86 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:24:53.370 21:21:15 -- common/autotest_common.sh@88 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:24:53.370 21:21:15 -- common/autotest_common.sh@90 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:24:53.370 21:21:15 -- common/autotest_common.sh@92 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:24:53.370 21:21:15 -- common/autotest_common.sh@94 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:24:53.370 21:21:15 -- common/autotest_common.sh@96 -- # : rdma 00:24:53.370 21:21:15 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:24:53.370 21:21:15 -- common/autotest_common.sh@98 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:24:53.370 21:21:15 -- common/autotest_common.sh@100 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:24:53.370 21:21:15 -- common/autotest_common.sh@102 -- # : 1 00:24:53.370 21:21:15 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:24:53.370 21:21:15 -- common/autotest_common.sh@104 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:24:53.370 21:21:15 -- common/autotest_common.sh@106 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:24:53.370 21:21:15 -- common/autotest_common.sh@108 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:24:53.370 21:21:15 -- common/autotest_common.sh@110 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:24:53.370 21:21:15 -- common/autotest_common.sh@112 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:24:53.370 21:21:15 -- common/autotest_common.sh@114 -- # : 1 00:24:53.370 21:21:15 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:24:53.370 21:21:15 -- common/autotest_common.sh@116 -- # : 1 00:24:53.370 21:21:15 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:24:53.370 21:21:15 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:24:53.370 21:21:15 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:24:53.370 21:21:15 -- common/autotest_common.sh@120 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:24:53.370 21:21:15 -- common/autotest_common.sh@122 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:24:53.370 21:21:15 -- common/autotest_common.sh@124 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:24:53.370 21:21:15 -- 
common/autotest_common.sh@126 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:24:53.370 21:21:15 -- common/autotest_common.sh@128 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:24:53.370 21:21:15 -- common/autotest_common.sh@130 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:24:53.370 21:21:15 -- common/autotest_common.sh@132 -- # : v23.11 00:24:53.370 21:21:15 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:24:53.370 21:21:15 -- common/autotest_common.sh@134 -- # : true 00:24:53.370 21:21:15 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:24:53.370 21:21:15 -- common/autotest_common.sh@136 -- # : 1 00:24:53.370 21:21:15 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:24:53.370 21:21:15 -- common/autotest_common.sh@138 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:24:53.370 21:21:15 -- common/autotest_common.sh@140 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:24:53.370 21:21:15 -- common/autotest_common.sh@142 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:24:53.370 21:21:15 -- common/autotest_common.sh@144 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:24:53.370 21:21:15 -- common/autotest_common.sh@146 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:24:53.370 21:21:15 -- common/autotest_common.sh@148 -- # : 00:24:53.370 21:21:15 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:24:53.370 21:21:15 -- common/autotest_common.sh@150 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:24:53.370 21:21:15 -- common/autotest_common.sh@152 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:24:53.370 21:21:15 -- common/autotest_common.sh@154 -- # : 0 00:24:53.370 21:21:15 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:24:53.370 21:21:15 -- common/autotest_common.sh@156 -- # : 0 00:24:53.371 21:21:15 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:24:53.371 21:21:15 -- common/autotest_common.sh@158 -- # : 0 00:24:53.371 21:21:15 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:24:53.371 21:21:15 -- common/autotest_common.sh@160 -- # : 0 00:24:53.371 21:21:15 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:24:53.371 21:21:15 -- common/autotest_common.sh@163 -- # : 00:24:53.371 21:21:15 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:24:53.371 21:21:15 -- common/autotest_common.sh@165 -- # : 0 00:24:53.371 21:21:15 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:24:53.371 21:21:15 -- common/autotest_common.sh@167 -- # : 0 00:24:53.371 21:21:15 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:24:53.371 21:21:15 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:24:53.371 21:21:15 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:24:53.371 21:21:15 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:24:53.371 21:21:15 -- common/autotest_common.sh@172 -- # 
DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:24:53.371 21:21:15 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:53.371 21:21:15 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:53.371 21:21:15 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:53.371 21:21:15 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:53.371 21:21:15 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:24:53.371 21:21:15 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:24:53.371 21:21:15 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:53.371 21:21:15 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:53.371 21:21:15 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:24:53.371 21:21:15 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:24:53.371 21:21:15 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:53.371 21:21:15 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:53.371 21:21:15 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:53.371 21:21:15 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:53.371 21:21:15 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:24:53.371 21:21:15 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:24:53.371 21:21:15 -- common/autotest_common.sh@196 -- # cat 00:24:53.371 21:21:15 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:24:53.371 21:21:15 -- 
common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:53.371 21:21:15 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:53.371 21:21:15 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:53.371 21:21:15 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:53.371 21:21:15 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:24:53.371 21:21:15 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:24:53.371 21:21:15 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:24:53.371 21:21:15 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:24:53.371 21:21:15 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:24:53.371 21:21:15 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:24:53.371 21:21:15 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:24:53.371 21:21:15 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:24:53.371 21:21:15 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:24:53.371 21:21:15 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:24:53.371 21:21:15 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:24:53.371 21:21:15 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:24:53.371 21:21:15 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:53.371 21:21:15 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:53.371 21:21:15 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:24:53.371 21:21:15 -- common/autotest_common.sh@249 -- # export valgrind= 00:24:53.371 21:21:15 -- common/autotest_common.sh@249 -- # valgrind= 00:24:53.371 21:21:15 -- common/autotest_common.sh@255 -- # uname -s 00:24:53.371 21:21:15 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:24:53.371 21:21:15 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:24:53.371 21:21:15 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:24:53.371 21:21:15 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:24:53.371 21:21:15 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:24:53.371 21:21:15 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:24:53.371 21:21:15 -- common/autotest_common.sh@265 -- # MAKE=make 00:24:53.371 21:21:15 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:24:53.371 21:21:15 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:24:53.371 21:21:15 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:24:53.371 21:21:15 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:24:53.371 21:21:15 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:24:53.371 21:21:15 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:24:53.371 21:21:15 -- common/autotest_common.sh@309 -- # [[ -z 146713 ]] 00:24:53.371 21:21:15 -- common/autotest_common.sh@309 -- # kill -0 146713 00:24:53.371 21:21:15 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:24:53.371 21:21:15 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:24:53.371 21:21:15 -- 
common/autotest_common.sh@321 -- # local requested_size=2147483648 00:24:53.371 21:21:15 -- common/autotest_common.sh@322 -- # local mount target_dir 00:24:53.371 21:21:15 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:24:53.371 21:21:15 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:24:53.371 21:21:15 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:24:53.371 21:21:15 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:24:53.371 21:21:15 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.ftSaEJ 00:24:53.371 21:21:15 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:24:53.371 21:21:15 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:24:53.371 21:21:15 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:24:53.371 21:21:15 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.ftSaEJ/tests/interrupt /tmp/spdk.ftSaEJ 00:24:53.371 21:21:15 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:24:53.371 21:21:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:53.371 21:21:15 -- common/autotest_common.sh@318 -- # df -T 00:24:53.371 21:21:15 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:24:53.371 21:21:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=udev 00:24:53.371 21:21:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:24:53.371 21:21:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=6224465920 00:24:53.371 21:21:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6224465920 00:24:53.371 21:21:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:24:53.371 21:21:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:53.371 21:21:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:24:53.371 21:21:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:24:53.371 21:21:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=1249759232 00:24:53.371 21:21:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254514688 00:24:53.371 21:21:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=4755456 00:24:53.371 21:21:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:53.371 21:21:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:24:53.371 21:21:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:24:53.371 21:21:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=8593018880 00:24:53.371 21:21:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:24:53.371 21:21:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=12006998016 00:24:53.371 21:21:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:53.371 21:21:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:24:53.371 21:21:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:24:53.371 21:21:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=6271307776 00:24:53.371 21:21:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272565248 00:24:53.371 21:21:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:24:53.371 21:21:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:53.371 21:21:15 -- 
common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:24:53.371 21:21:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:24:53.371 21:21:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:24:53.371 21:21:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:24:53.372 21:21:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:24:53.372 21:21:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:53.372 21:21:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:24:53.372 21:21:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:24:53.372 21:21:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=6272565248 00:24:53.372 21:21:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272565248 00:24:53.372 21:21:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:24:53.372 21:21:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:53.372 21:21:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop0 00:24:53.372 21:21:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:24:53.372 21:21:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:24:53.372 21:21:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:24:53.372 21:21:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:24:53.372 21:21:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:53.372 21:21:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:24:53.372 21:21:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:24:53.372 21:21:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=103089152 00:24:53.372 21:21:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109422592 00:24:53.372 21:21:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:24:53.372 21:21:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:53.372 21:21:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop1 00:24:53.372 21:21:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:24:53.372 21:21:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:24:53.372 21:21:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=96337920 00:24:53.372 21:21:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=96337920 00:24:53.372 21:21:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:53.372 21:21:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop2 00:24:53.372 21:21:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:24:53.372 21:21:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:24:53.372 21:21:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=41025536 00:24:53.372 21:21:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=41025536 00:24:53.372 21:21:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:53.372 21:21:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:24:53.372 21:21:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:24:53.372 21:21:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=1254510592 00:24:53.372 21:21:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254510592 00:24:53.372 21:21:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:24:53.372 21:21:15 -- common/autotest_common.sh@351 
-- # read -r source fs size use avail _ mount 00:24:53.372 21:21:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:24:53.372 21:21:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:24:53.372 21:21:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=95662186496 00:24:53.372 21:21:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:24:53.372 21:21:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=4040593408 00:24:53.372 21:21:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:53.372 21:21:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop3 00:24:53.372 21:21:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:24:53.372 21:21:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:24:53.372 21:21:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=40763392 00:24:53.372 21:21:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=40763392 00:24:53.372 21:21:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:53.372 21:21:15 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop4 00:24:53.372 21:21:15 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:24:53.372 21:21:15 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:24:53.372 21:21:15 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:24:53.372 21:21:15 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:24:53.372 21:21:15 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:53.372 21:21:15 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:24:53.372 * Looking for test storage... 
00:24:53.372 21:21:15 -- common/autotest_common.sh@359 -- # local target_space new_size 00:24:53.372 21:21:15 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:24:53.372 21:21:15 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:53.372 21:21:15 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:24:53.372 21:21:15 -- common/autotest_common.sh@363 -- # mount=/ 00:24:53.372 21:21:15 -- common/autotest_common.sh@365 -- # target_space=8593018880 00:24:53.372 21:21:15 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:24:53.372 21:21:15 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:24:53.372 21:21:15 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:24:53.372 21:21:15 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:24:53.372 21:21:15 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:24:53.372 21:21:15 -- common/autotest_common.sh@372 -- # new_size=14221590528 00:24:53.372 21:21:15 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:24:53.372 21:21:15 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:53.372 21:21:15 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:53.372 21:21:15 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:53.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:53.372 21:21:15 -- common/autotest_common.sh@380 -- # return 0 00:24:53.372 21:21:15 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:24:53.372 21:21:15 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:24:53.372 21:21:15 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:24:53.372 21:21:15 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:24:53.372 21:21:15 -- common/autotest_common.sh@1672 -- # true 00:24:53.372 21:21:15 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:24:53.372 21:21:15 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:24:53.372 21:21:15 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:24:53.372 21:21:15 -- common/autotest_common.sh@27 -- # exec 00:24:53.372 21:21:15 -- common/autotest_common.sh@29 -- # exec 00:24:53.372 21:21:15 -- common/autotest_common.sh@31 -- # xtrace_restore 00:24:53.372 21:21:15 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:24:53.372 21:21:15 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:24:53.372 21:21:15 -- common/autotest_common.sh@18 -- # set -x 00:24:53.372 21:21:15 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:53.372 21:21:15 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:24:53.372 21:21:15 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:24:53.372 21:21:15 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:24:53.372 21:21:15 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:24:53.372 21:21:15 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:24:53.372 21:21:15 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:24:53.372 21:21:15 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:24:53.372 21:21:15 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:24:53.372 21:21:15 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.372 21:21:15 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:24:53.372 21:21:15 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=146753 00:24:53.372 21:21:15 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:53.372 21:21:15 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 146753 /var/tmp/spdk.sock 00:24:53.373 21:21:15 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:53.373 21:21:15 -- common/autotest_common.sh@819 -- # '[' -z 146753 ']' 00:24:53.373 21:21:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.373 21:21:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:53.373 21:21:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.373 21:21:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:53.373 21:21:15 -- common/autotest_common.sh@10 -- # set +x 00:24:53.373 [2024-06-07 21:21:15.634004] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:53.373 [2024-06-07 21:21:15.634259] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146753 ] 00:24:53.373 [2024-06-07 21:21:15.815224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:53.373 [2024-06-07 21:21:15.905035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.373 [2024-06-07 21:21:15.905102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.373 [2024-06-07 21:21:15.905103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:53.373 [2024-06-07 21:21:15.988290] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:53.940 21:21:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:53.940 21:21:16 -- common/autotest_common.sh@852 -- # return 0 00:24:53.940 21:21:16 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:24:53.940 21:21:16 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:54.198 Malloc0 00:24:54.198 Malloc1 00:24:54.198 Malloc2 00:24:54.198 21:21:16 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:24:54.198 21:21:16 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:24:54.198 21:21:16 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:54.198 21:21:16 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:54.198 5000+0 records in 00:24:54.198 5000+0 records out 00:24:54.198 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0149924 s, 683 MB/s 00:24:54.198 21:21:16 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:24:54.457 AIO0 00:24:54.716 21:21:17 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 146753 00:24:54.716 21:21:17 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 146753 without_thd 00:24:54.716 21:21:17 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=146753 00:24:54.716 21:21:17 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:24:54.716 21:21:17 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:24:54.716 21:21:17 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:24:54.716 21:21:17 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:24:54.716 21:21:17 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:24:54.716 21:21:17 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:24:54.716 21:21:17 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:54.716 21:21:17 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:24:54.716 21:21:17 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:54.716 21:21:17 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:24:54.716 21:21:17 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:24:54.716 21:21:17 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:24:54.716 21:21:17 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:24:54.716 21:21:17 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:24:54.716 21:21:17 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:24:54.716 21:21:17 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:54.716 21:21:17 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:24:54.716 21:21:17 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:54.975 21:21:17 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:24:54.975 21:21:17 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:24:54.975 spdk_thread ids are 1 on reactor0. 00:24:54.975 21:21:17 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:24:54.975 21:21:17 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:54.975 21:21:17 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 146753 0 00:24:54.975 21:21:17 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 146753 0 idle 00:24:54.975 21:21:17 -- interrupt/interrupt_common.sh@33 -- # local pid=146753 00:24:54.975 21:21:17 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:54.975 21:21:17 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:54.975 21:21:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:54.975 21:21:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:54.975 21:21:17 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:54.975 21:21:17 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:54.975 21:21:17 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:54.975 21:21:17 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 146753 -w 256 00:24:54.975 21:21:17 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:55.234 21:21:17 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 146753 root 20 0 20.1t 75500 26008 S 0.0 0.6 0:00.35 reactor_0' 00:24:55.234 21:21:17 -- interrupt/interrupt_common.sh@48 -- # echo 146753 root 20 0 20.1t 75500 26008 S 0.0 0.6 0:00.35 reactor_0 00:24:55.234 21:21:17 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:55.234 21:21:17 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:55.234 21:21:17 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:55.234 21:21:17 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:55.234 21:21:17 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:55.234 21:21:17 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:55.234 21:21:17 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:55.234 21:21:17 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:55.234 21:21:17 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:55.234 21:21:17 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 146753 1 00:24:55.234 21:21:17 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 146753 1 idle 00:24:55.234 21:21:17 -- interrupt/interrupt_common.sh@33 -- # local pid=146753 00:24:55.234 21:21:17 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:24:55.234 21:21:17 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:55.234 21:21:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:55.234 
21:21:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:55.234 21:21:17 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:55.234 21:21:17 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:55.234 21:21:17 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:55.234 21:21:17 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 146753 -w 256 00:24:55.234 21:21:17 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 146762 root 20 0 20.1t 75500 26008 S 0.0 0.6 0:00.00 reactor_1' 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@48 -- # echo 146762 root 20 0 20.1t 75500 26008 S 0.0 0.6 0:00.00 reactor_1 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:55.493 21:21:17 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:55.493 21:21:17 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 146753 2 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 146753 2 idle 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@33 -- # local pid=146753 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 146753 -w 256 00:24:55.493 21:21:17 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:55.493 21:21:18 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 146763 root 20 0 20.1t 75500 26008 S 0.0 0.6 0:00.00 reactor_2' 00:24:55.493 21:21:18 -- interrupt/interrupt_common.sh@48 -- # echo 146763 root 20 0 20.1t 75500 26008 S 0.0 0.6 0:00.00 reactor_2 00:24:55.493 21:21:18 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:55.493 21:21:18 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:55.493 21:21:18 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:55.493 21:21:18 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:55.493 21:21:18 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:55.493 21:21:18 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:55.493 21:21:18 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:55.493 21:21:18 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:55.493 21:21:18 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:24:55.493 21:21:18 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:24:55.493 
21:21:18 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:24:55.752 [2024-06-07 21:21:18.352770] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:55.752 21:21:18 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:24:56.011 [2024-06-07 21:21:18.612777] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:24:56.011 [2024-06-07 21:21:18.613403] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:56.011 21:21:18 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:24:56.270 [2024-06-07 21:21:18.884592] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:24:56.270 [2024-06-07 21:21:18.885294] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:56.270 21:21:18 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:56.270 21:21:18 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 146753 0 00:24:56.270 21:21:18 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 146753 0 busy 00:24:56.270 21:21:18 -- interrupt/interrupt_common.sh@33 -- # local pid=146753 00:24:56.270 21:21:18 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:56.270 21:21:18 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:24:56.270 21:21:18 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:24:56.270 21:21:18 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:56.270 21:21:18 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:56.270 21:21:18 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:56.270 21:21:18 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 146753 -w 256 00:24:56.270 21:21:18 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:56.528 21:21:19 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 146753 root 20 0 20.1t 75640 26008 R 99.9 0.6 0:00.80 reactor_0' 00:24:56.528 21:21:19 -- interrupt/interrupt_common.sh@48 -- # echo 146753 root 20 0 20.1t 75640 26008 R 99.9 0.6 0:00.80 reactor_0 00:24:56.528 21:21:19 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:56.528 21:21:19 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:56.528 21:21:19 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:24:56.528 21:21:19 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:24:56.528 21:21:19 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:24:56.528 21:21:19 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:24:56.528 21:21:19 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:24:56.528 21:21:19 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:56.528 21:21:19 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:56.528 21:21:19 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 146753 2 00:24:56.528 21:21:19 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 146753 2 busy 00:24:56.528 21:21:19 -- interrupt/interrupt_common.sh@33 -- # local pid=146753 00:24:56.528 21:21:19 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:56.528 21:21:19 -- 
interrupt/interrupt_common.sh@35 -- # local state=busy 00:24:56.528 21:21:19 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:24:56.528 21:21:19 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:56.528 21:21:19 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:56.528 21:21:19 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:56.528 21:21:19 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 146753 -w 256 00:24:56.528 21:21:19 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:56.787 21:21:19 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 146763 root 20 0 20.1t 75640 26008 R 87.5 0.6 0:00.33 reactor_2' 00:24:56.787 21:21:19 -- interrupt/interrupt_common.sh@48 -- # echo 146763 root 20 0 20.1t 75640 26008 R 87.5 0.6 0:00.33 reactor_2 00:24:56.787 21:21:19 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:56.787 21:21:19 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:56.787 21:21:19 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=87.5 00:24:56.787 21:21:19 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=87 00:24:56.787 21:21:19 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:24:56.787 21:21:19 -- interrupt/interrupt_common.sh@51 -- # [[ 87 -lt 70 ]] 00:24:56.787 21:21:19 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:24:56.787 21:21:19 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:56.787 21:21:19 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:24:56.787 [2024-06-07 21:21:19.448618] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:24:56.787 [2024-06-07 21:21:19.448994] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:57.046 21:21:19 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:24:57.046 21:21:19 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 146753 2 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 146753 2 idle 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@33 -- # local pid=146753 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 146753 -w 256 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 146763 root 20 0 20.1t 75708 26008 S 0.0 0.6 0:00.56 reactor_2' 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@48 -- # echo 146763 root 20 0 20.1t 75708 26008 S 0.0 0.6 0:00.56 reactor_2 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@49 -- 
# cpu_rate=0 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:57.046 21:21:19 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:57.046 21:21:19 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:24:57.305 [2024-06-07 21:21:19.884630] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:24:57.305 [2024-06-07 21:21:19.885186] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:57.305 21:21:19 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:24:57.305 21:21:19 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:24:57.305 21:21:19 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:24:57.563 [2024-06-07 21:21:20.144991] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:57.563 21:21:20 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 146753 0 00:24:57.563 21:21:20 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 146753 0 idle 00:24:57.563 21:21:20 -- interrupt/interrupt_common.sh@33 -- # local pid=146753 00:24:57.563 21:21:20 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:57.563 21:21:20 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:57.563 21:21:20 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:57.563 21:21:20 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:57.563 21:21:20 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:57.563 21:21:20 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:57.563 21:21:20 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:57.563 21:21:20 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:57.563 21:21:20 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 146753 -w 256 00:24:57.822 21:21:20 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 146753 root 20 0 20.1t 75800 26008 S 0.0 0.6 0:01.63 reactor_0' 00:24:57.822 21:21:20 -- interrupt/interrupt_common.sh@48 -- # echo 146753 root 20 0 20.1t 75800 26008 S 0.0 0.6 0:01.63 reactor_0 00:24:57.822 21:21:20 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:57.822 21:21:20 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:57.822 21:21:20 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:57.822 21:21:20 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:57.822 21:21:20 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:57.822 21:21:20 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:57.822 21:21:20 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:57.822 21:21:20 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:57.822 21:21:20 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:24:57.822 21:21:20 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:24:57.822 21:21:20 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:24:57.822 21:21:20 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 146753 00:24:57.822 21:21:20 -- 
common/autotest_common.sh@926 -- # '[' -z 146753 ']' 00:24:57.822 21:21:20 -- common/autotest_common.sh@930 -- # kill -0 146753 00:24:57.822 21:21:20 -- common/autotest_common.sh@931 -- # uname 00:24:57.822 21:21:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:57.822 21:21:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 146753 00:24:57.822 21:21:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:57.822 21:21:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:57.822 21:21:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 146753' 00:24:57.822 killing process with pid 146753 00:24:57.822 21:21:20 -- common/autotest_common.sh@945 -- # kill 146753 00:24:57.822 21:21:20 -- common/autotest_common.sh@950 -- # wait 146753 00:24:58.081 21:21:20 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:24:58.081 21:21:20 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:24:58.081 21:21:20 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:24:58.081 21:21:20 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.081 21:21:20 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:24:58.081 21:21:20 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=146898 00:24:58.081 21:21:20 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:58.081 21:21:20 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 146898 /var/tmp/spdk.sock 00:24:58.081 21:21:20 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:58.081 21:21:20 -- common/autotest_common.sh@819 -- # '[' -z 146898 ']' 00:24:58.081 21:21:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.081 21:21:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:58.081 21:21:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.081 21:21:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:58.081 21:21:20 -- common/autotest_common.sh@10 -- # set +x 00:24:58.081 [2024-06-07 21:21:20.687386] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:58.081 [2024-06-07 21:21:20.687726] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146898 ] 00:24:58.340 [2024-06-07 21:21:20.854760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:58.340 [2024-06-07 21:21:20.934525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.340 [2024-06-07 21:21:20.935042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.340 [2024-06-07 21:21:20.935093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.599 [2024-06-07 21:21:21.022181] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
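Aside for readers following the trace: the killprocess steps above (uname, ps --no-headers -o comm=, the reactor_0-vs-sudo comparison, kill, wait) all come from one guard pattern — confirm the pid still names the expected process and is not the sudo wrapper before signalling it. A minimal sketch of that pattern, with an illustrative function name rather than the exact autotest_common.sh source:

# Sketch of a killprocess-style helper: verify the pid, refuse to signal a
# "sudo" wrapper, then kill and reap. Illustrative only.
killprocess_sketch() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 0        # already gone
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1    # never kill the sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                       # reap; only works for children of this shell
}

In the trace this is why ps reports reactor_0 for pid 146753: the comm= check matches the SPDK reactor thread name, not the binary path.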
00:24:59.166 21:21:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:59.166 21:21:21 -- common/autotest_common.sh@852 -- # return 0 00:24:59.166 21:21:21 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:24:59.166 21:21:21 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:59.426 Malloc0 00:24:59.426 Malloc1 00:24:59.426 Malloc2 00:24:59.426 21:21:21 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:24:59.426 21:21:21 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:24:59.426 21:21:21 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:59.426 21:21:21 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:59.426 5000+0 records in 00:24:59.426 5000+0 records out 00:24:59.426 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0271145 s, 378 MB/s 00:24:59.426 21:21:22 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:24:59.692 AIO0 00:24:59.692 21:21:22 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 146898 00:24:59.692 21:21:22 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 146898 00:24:59.692 21:21:22 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=146898 00:24:59.692 21:21:22 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:24:59.692 21:21:22 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:24:59.692 21:21:22 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:24:59.692 21:21:22 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:24:59.692 21:21:22 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:24:59.692 21:21:22 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:24:59.692 21:21:22 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:59.692 21:21:22 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:24:59.692 21:21:22 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:59.963 21:21:22 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:24:59.963 21:21:22 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:24:59.963 21:21:22 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:24:59.963 21:21:22 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:24:59.963 21:21:22 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:24:59.963 21:21:22 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:24:59.963 21:21:22 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:59.963 21:21:22 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:24:59.963 21:21:22 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:00.222 21:21:22 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:25:00.222 spdk_thread ids are 1 on reactor0. 
00:25:00.222 21:21:22 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:25:00.222 21:21:22 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:25:00.222 21:21:22 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:25:00.222 21:21:22 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 146898 0 00:25:00.222 21:21:22 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 146898 0 idle 00:25:00.222 21:21:22 -- interrupt/interrupt_common.sh@33 -- # local pid=146898 00:25:00.222 21:21:22 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:25:00.222 21:21:22 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:00.222 21:21:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:00.222 21:21:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:00.222 21:21:22 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:00.222 21:21:22 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:00.222 21:21:22 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:00.222 21:21:22 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:25:00.222 21:21:22 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 146898 -w 256 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 146898 root 20 0 20.1t 75360 25992 S 6.7 0.6 0:00.34 reactor_0' 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@48 -- # echo 146898 root 20 0 20.1t 75360 25992 S 6.7 0.6 0:00.34 reactor_0 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:00.481 21:21:22 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:25:00.481 21:21:22 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 146898 1 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 146898 1 idle 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@33 -- # local pid=146898 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 146898 -w 256 00:25:00.481 21:21:22 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:25:00.481 21:21:23 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 146902 root 20 0 20.1t 75360 25992 S 0.0 0.6 0:00.00 reactor_1' 00:25:00.481 21:21:23 -- interrupt/interrupt_common.sh@48 -- # echo 146902 root 20 0 20.1t 75360 25992 S 0.0 0.6 0:00.00 reactor_1 00:25:00.481 21:21:23 -- 
interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:00.481 21:21:23 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:00.481 21:21:23 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:00.481 21:21:23 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:00.481 21:21:23 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:00.481 21:21:23 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:00.481 21:21:23 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:00.481 21:21:23 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:00.481 21:21:23 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:25:00.482 21:21:23 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 146898 2 00:25:00.482 21:21:23 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 146898 2 idle 00:25:00.482 21:21:23 -- interrupt/interrupt_common.sh@33 -- # local pid=146898 00:25:00.482 21:21:23 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:25:00.482 21:21:23 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:00.482 21:21:23 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:00.482 21:21:23 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:00.482 21:21:23 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:00.482 21:21:23 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:00.482 21:21:23 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:00.482 21:21:23 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 146898 -w 256 00:25:00.482 21:21:23 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:25:00.739 21:21:23 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 146903 root 20 0 20.1t 75360 25992 S 0.0 0.6 0:00.00 reactor_2' 00:25:00.739 21:21:23 -- interrupt/interrupt_common.sh@48 -- # echo 146903 root 20 0 20.1t 75360 25992 S 0.0 0.6 0:00.00 reactor_2 00:25:00.739 21:21:23 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:00.739 21:21:23 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:00.739 21:21:23 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:00.739 21:21:23 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:00.739 21:21:23 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:00.739 21:21:23 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:00.739 21:21:23 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:00.739 21:21:23 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:00.739 21:21:23 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:25:00.739 21:21:23 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:25:00.998 [2024-06-07 21:21:23.570518] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:25:00.998 [2024-06-07 21:21:23.570829] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
00:25:00.998 [2024-06-07 21:21:23.571070] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:00.998 21:21:23 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:25:01.256 [2024-06-07 21:21:23.826423] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:25:01.256 [2024-06-07 21:21:23.826938] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:01.256 21:21:23 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:25:01.256 21:21:23 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 146898 0 00:25:01.256 21:21:23 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 146898 0 busy 00:25:01.256 21:21:23 -- interrupt/interrupt_common.sh@33 -- # local pid=146898 00:25:01.256 21:21:23 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:25:01.256 21:21:23 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:25:01.256 21:21:23 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:25:01.256 21:21:23 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:01.256 21:21:23 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:01.256 21:21:23 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:01.256 21:21:23 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 146898 -w 256 00:25:01.256 21:21:23 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 146898 root 20 0 20.1t 75464 25992 R 93.3 0.6 0:00.76 reactor_0' 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@48 -- # echo 146898 root 20 0 20.1t 75464 25992 R 93.3 0.6 0:00.76 reactor_0 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=93.3 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=93 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@51 -- # [[ 93 -lt 70 ]] 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:01.514 21:21:24 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:25:01.514 21:21:24 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 146898 2 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 146898 2 busy 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@33 -- # local pid=146898 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 146898 -w 256 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 
146903 root 20 0 20.1t 75464 25992 R 99.9 0.6 0:00.34 reactor_2' 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@48 -- # echo 146903 root 20 0 20.1t 75464 25992 R 99.9 0.6 0:00.34 reactor_2 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:25:01.514 21:21:24 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:01.514 21:21:24 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:25:01.773 [2024-06-07 21:21:24.393966] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:25:01.773 [2024-06-07 21:21:24.394418] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:01.773 21:21:24 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:25:01.773 21:21:24 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 146898 2 00:25:01.773 21:21:24 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 146898 2 idle 00:25:01.773 21:21:24 -- interrupt/interrupt_common.sh@33 -- # local pid=146898 00:25:01.773 21:21:24 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:25:01.773 21:21:24 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:01.773 21:21:24 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:01.773 21:21:24 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:01.773 21:21:24 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:01.773 21:21:24 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:01.773 21:21:24 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:01.773 21:21:24 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 146898 -w 256 00:25:01.773 21:21:24 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:25:02.032 21:21:24 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 146903 root 20 0 20.1t 75552 25992 S 0.0 0.6 0:00.56 reactor_2' 00:25:02.032 21:21:24 -- interrupt/interrupt_common.sh@48 -- # echo 146903 root 20 0 20.1t 75552 25992 S 0.0 0.6 0:00.56 reactor_2 00:25:02.032 21:21:24 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:02.032 21:21:24 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:02.032 21:21:24 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:02.032 21:21:24 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:02.032 21:21:24 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:02.032 21:21:24 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:02.032 21:21:24 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:02.032 21:21:24 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:02.032 21:21:24 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:25:02.291 [2024-06-07 21:21:24.861987] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to 
enable interrupt mode on reactor 0. 00:25:02.291 [2024-06-07 21:21:24.862542] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:25:02.291 [2024-06-07 21:21:24.862605] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:02.291 21:21:24 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:25:02.291 21:21:24 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 146898 0 00:25:02.291 21:21:24 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 146898 0 idle 00:25:02.291 21:21:24 -- interrupt/interrupt_common.sh@33 -- # local pid=146898 00:25:02.291 21:21:24 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:25:02.291 21:21:24 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:02.291 21:21:24 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:02.291 21:21:24 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:02.291 21:21:24 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:02.291 21:21:24 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:02.292 21:21:24 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:02.292 21:21:24 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 146898 -w 256 00:25:02.292 21:21:24 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:25:02.550 21:21:25 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 146898 root 20 0 20.1t 75592 25992 S 0.0 0.6 0:01.63 reactor_0' 00:25:02.550 21:21:25 -- interrupt/interrupt_common.sh@48 -- # echo 146898 root 20 0 20.1t 75592 25992 S 0.0 0.6 0:01.63 reactor_0 00:25:02.550 21:21:25 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:02.550 21:21:25 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:02.550 21:21:25 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:02.550 21:21:25 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:02.550 21:21:25 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:02.550 21:21:25 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:02.550 21:21:25 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:02.550 21:21:25 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:02.550 21:21:25 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:25:02.550 21:21:25 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:25:02.550 21:21:25 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:25:02.550 21:21:25 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 146898 00:25:02.550 21:21:25 -- common/autotest_common.sh@926 -- # '[' -z 146898 ']' 00:25:02.550 21:21:25 -- common/autotest_common.sh@930 -- # kill -0 146898 00:25:02.551 21:21:25 -- common/autotest_common.sh@931 -- # uname 00:25:02.551 21:21:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:02.551 21:21:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 146898 00:25:02.551 killing process with pid 146898 00:25:02.551 21:21:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:02.551 21:21:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:02.551 21:21:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 146898' 00:25:02.551 21:21:25 -- common/autotest_common.sh@945 -- # kill 146898 00:25:02.551 21:21:25 -- common/autotest_common.sh@950 -- # wait 146898 00:25:03.119 21:21:25 -- 
interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:25:03.119 21:21:25 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:25:03.119 ************************************ 00:25:03.119 END TEST reactor_set_interrupt 00:25:03.119 ************************************ 00:25:03.119 00:25:03.119 real 0m10.137s 00:25:03.119 user 0m10.010s 00:25:03.119 sys 0m1.548s 00:25:03.119 21:21:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:03.119 21:21:25 -- common/autotest_common.sh@10 -- # set +x 00:25:03.119 21:21:25 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:25:03.119 21:21:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:03.119 21:21:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:03.119 21:21:25 -- common/autotest_common.sh@10 -- # set +x 00:25:03.119 ************************************ 00:25:03.120 START TEST reap_unregistered_poller 00:25:03.120 ************************************ 00:25:03.120 21:21:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:25:03.120 * Looking for test storage... 00:25:03.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:03.120 21:21:25 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:25:03.120 21:21:25 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:25:03.120 21:21:25 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:03.120 21:21:25 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:25:03.120 21:21:25 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
00:25:03.120 21:21:25 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:03.120 21:21:25 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:25:03.120 21:21:25 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:25:03.120 21:21:25 -- common/autotest_common.sh@34 -- # set -e 00:25:03.120 21:21:25 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:25:03.120 21:21:25 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:25:03.120 21:21:25 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:25:03.120 21:21:25 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:25:03.120 21:21:25 -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:25:03.120 21:21:25 -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:25:03.120 21:21:25 -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:25:03.120 21:21:25 -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:25:03.120 21:21:25 -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:25:03.120 21:21:25 -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:25:03.120 21:21:25 -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:25:03.120 21:21:25 -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:25:03.120 21:21:25 -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:25:03.120 21:21:25 -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:25:03.120 21:21:25 -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:25:03.120 21:21:25 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:25:03.120 21:21:25 -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:25:03.120 21:21:25 -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:25:03.120 21:21:25 -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:25:03.120 21:21:25 -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:25:03.120 21:21:25 -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:25:03.120 21:21:25 -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:25:03.120 21:21:25 -- common/build_config.sh@19 -- # CONFIG_CET=n 00:25:03.120 21:21:25 -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:25:03.120 21:21:25 -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:25:03.120 21:21:25 -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:25:03.120 21:21:25 -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:25:03.120 21:21:25 -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:25:03.120 21:21:25 -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:25:03.120 21:21:25 -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:25:03.120 21:21:25 -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:25:03.120 21:21:25 -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:25:03.120 21:21:25 -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:25:03.120 21:21:25 -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:25:03.120 21:21:25 -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:25:03.120 21:21:25 -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES= 00:25:03.120 21:21:25 -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:25:03.120 21:21:25 -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:25:03.120 21:21:25 -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:25:03.120 21:21:25 -- common/build_config.sh@36 -- # CONFIG_IPSEC_MB=n 00:25:03.120 21:21:25 -- 
common/build_config.sh@37 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:25:03.120 21:21:25 -- common/build_config.sh@38 -- # CONFIG_ASAN=y 00:25:03.120 21:21:25 -- common/build_config.sh@39 -- # CONFIG_SHARED=n 00:25:03.120 21:21:25 -- common/build_config.sh@40 -- # CONFIG_VTUNE_DIR= 00:25:03.120 21:21:25 -- common/build_config.sh@41 -- # CONFIG_RDMA_SET_TOS=y 00:25:03.120 21:21:25 -- common/build_config.sh@42 -- # CONFIG_VBDEV_COMPRESS=n 00:25:03.120 21:21:25 -- common/build_config.sh@43 -- # CONFIG_VFIO_USER_DIR= 00:25:03.120 21:21:25 -- common/build_config.sh@44 -- # CONFIG_FUZZER_LIB= 00:25:03.120 21:21:25 -- common/build_config.sh@45 -- # CONFIG_HAVE_EXECINFO_H=y 00:25:03.120 21:21:25 -- common/build_config.sh@46 -- # CONFIG_USDT=n 00:25:03.120 21:21:25 -- common/build_config.sh@47 -- # CONFIG_URING_ZNS=n 00:25:03.120 21:21:25 -- common/build_config.sh@48 -- # CONFIG_FC_PATH= 00:25:03.120 21:21:25 -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:25:03.120 21:21:25 -- common/build_config.sh@50 -- # CONFIG_CUSTOMOCF=n 00:25:03.120 21:21:25 -- common/build_config.sh@51 -- # CONFIG_DPDK_PKG_CONFIG=n 00:25:03.120 21:21:25 -- common/build_config.sh@52 -- # CONFIG_WERROR=y 00:25:03.120 21:21:25 -- common/build_config.sh@53 -- # CONFIG_DEBUG=y 00:25:03.120 21:21:25 -- common/build_config.sh@54 -- # CONFIG_RDMA=y 00:25:03.120 21:21:25 -- common/build_config.sh@55 -- # CONFIG_HAVE_ARC4RANDOM=n 00:25:03.120 21:21:25 -- common/build_config.sh@56 -- # CONFIG_FUZZER=n 00:25:03.120 21:21:25 -- common/build_config.sh@57 -- # CONFIG_FC=n 00:25:03.120 21:21:25 -- common/build_config.sh@58 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:25:03.120 21:21:25 -- common/build_config.sh@59 -- # CONFIG_HAVE_LIBARCHIVE=n 00:25:03.120 21:21:25 -- common/build_config.sh@60 -- # CONFIG_DPDK_COMPRESSDEV=n 00:25:03.120 21:21:25 -- common/build_config.sh@61 -- # CONFIG_CROSS_PREFIX= 00:25:03.120 21:21:25 -- common/build_config.sh@62 -- # CONFIG_PREFIX=/usr/local 00:25:03.120 21:21:25 -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBBSD=n 00:25:03.120 21:21:25 -- common/build_config.sh@64 -- # CONFIG_UBSAN=y 00:25:03.120 21:21:25 -- common/build_config.sh@65 -- # CONFIG_PGO_CAPTURE=n 00:25:03.120 21:21:25 -- common/build_config.sh@66 -- # CONFIG_UBLK=n 00:25:03.120 21:21:25 -- common/build_config.sh@67 -- # CONFIG_ISAL_CRYPTO=y 00:25:03.120 21:21:25 -- common/build_config.sh@68 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:25:03.120 21:21:25 -- common/build_config.sh@69 -- # CONFIG_CRYPTO=n 00:25:03.120 21:21:25 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:25:03.120 21:21:25 -- common/build_config.sh@71 -- # CONFIG_LIBDIR= 00:25:03.120 21:21:25 -- common/build_config.sh@72 -- # CONFIG_IPSEC_MB_DIR= 00:25:03.120 21:21:25 -- common/build_config.sh@73 -- # CONFIG_PGO_USE=n 00:25:03.120 21:21:25 -- common/build_config.sh@74 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:25:03.120 21:21:25 -- common/build_config.sh@75 -- # CONFIG_GOLANG=n 00:25:03.120 21:21:25 -- common/build_config.sh@76 -- # CONFIG_VHOST=y 00:25:03.120 21:21:25 -- common/build_config.sh@77 -- # CONFIG_IDXD=y 00:25:03.120 21:21:25 -- common/build_config.sh@78 -- # CONFIG_AVAHI=n 00:25:03.120 21:21:25 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:25:03.120 21:21:25 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:25:03.120 21:21:25 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:25:03.120 21:21:25 -- 
common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:25:03.120 21:21:25 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:25:03.120 21:21:25 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:25:03.120 21:21:25 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:25:03.120 21:21:25 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:25:03.120 21:21:25 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:25:03.120 21:21:25 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:25:03.120 21:21:25 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:25:03.120 21:21:25 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:25:03.120 21:21:25 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:25:03.120 21:21:25 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:25:03.120 21:21:25 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:25:03.120 21:21:25 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:25:03.120 21:21:25 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:25:03.120 #define SPDK_CONFIG_H 00:25:03.120 #define SPDK_CONFIG_APPS 1 00:25:03.120 #define SPDK_CONFIG_ARCH native 00:25:03.120 #define SPDK_CONFIG_ASAN 1 00:25:03.120 #undef SPDK_CONFIG_AVAHI 00:25:03.120 #undef SPDK_CONFIG_CET 00:25:03.120 #define SPDK_CONFIG_COVERAGE 1 00:25:03.120 #define SPDK_CONFIG_CROSS_PREFIX 00:25:03.120 #undef SPDK_CONFIG_CRYPTO 00:25:03.120 #undef SPDK_CONFIG_CRYPTO_MLX5 00:25:03.120 #undef SPDK_CONFIG_CUSTOMOCF 00:25:03.120 #undef SPDK_CONFIG_DAOS 00:25:03.120 #define SPDK_CONFIG_DAOS_DIR 00:25:03.120 #define SPDK_CONFIG_DEBUG 1 00:25:03.120 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:25:03.120 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:25:03.120 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:25:03.120 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:25:03.120 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:25:03.120 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:25:03.120 #define SPDK_CONFIG_EXAMPLES 1 00:25:03.120 #undef SPDK_CONFIG_FC 00:25:03.120 #define SPDK_CONFIG_FC_PATH 00:25:03.120 #define SPDK_CONFIG_FIO_PLUGIN 1 00:25:03.120 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:25:03.120 #undef SPDK_CONFIG_FUSE 00:25:03.120 #undef SPDK_CONFIG_FUZZER 00:25:03.120 #define SPDK_CONFIG_FUZZER_LIB 00:25:03.120 #undef SPDK_CONFIG_GOLANG 00:25:03.120 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:25:03.120 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:25:03.120 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:25:03.120 #undef SPDK_CONFIG_HAVE_LIBBSD 00:25:03.120 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:25:03.120 #define SPDK_CONFIG_IDXD 1 00:25:03.120 #undef SPDK_CONFIG_IDXD_KERNEL 00:25:03.120 #undef SPDK_CONFIG_IPSEC_MB 00:25:03.121 #define SPDK_CONFIG_IPSEC_MB_DIR 00:25:03.121 #define SPDK_CONFIG_ISAL 1 00:25:03.121 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:25:03.121 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:25:03.121 #define SPDK_CONFIG_LIBDIR 00:25:03.121 #undef SPDK_CONFIG_LTO 00:25:03.121 #define SPDK_CONFIG_MAX_LCORES 00:25:03.121 #define SPDK_CONFIG_NVME_CUSE 1 00:25:03.121 #undef SPDK_CONFIG_OCF 00:25:03.121 #define SPDK_CONFIG_OCF_PATH 00:25:03.121 
#define SPDK_CONFIG_OPENSSL_PATH 00:25:03.121 #undef SPDK_CONFIG_PGO_CAPTURE 00:25:03.121 #undef SPDK_CONFIG_PGO_USE 00:25:03.121 #define SPDK_CONFIG_PREFIX /usr/local 00:25:03.121 #define SPDK_CONFIG_RAID5F 1 00:25:03.121 #undef SPDK_CONFIG_RBD 00:25:03.121 #define SPDK_CONFIG_RDMA 1 00:25:03.121 #define SPDK_CONFIG_RDMA_PROV verbs 00:25:03.121 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:25:03.121 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:25:03.121 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:25:03.121 #undef SPDK_CONFIG_SHARED 00:25:03.121 #undef SPDK_CONFIG_SMA 00:25:03.121 #define SPDK_CONFIG_TESTS 1 00:25:03.121 #undef SPDK_CONFIG_TSAN 00:25:03.121 #undef SPDK_CONFIG_UBLK 00:25:03.121 #define SPDK_CONFIG_UBSAN 1 00:25:03.121 #define SPDK_CONFIG_UNIT_TESTS 1 00:25:03.121 #undef SPDK_CONFIG_URING 00:25:03.121 #define SPDK_CONFIG_URING_PATH 00:25:03.121 #undef SPDK_CONFIG_URING_ZNS 00:25:03.121 #undef SPDK_CONFIG_USDT 00:25:03.121 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:25:03.121 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:25:03.121 #undef SPDK_CONFIG_VFIO_USER 00:25:03.121 #define SPDK_CONFIG_VFIO_USER_DIR 00:25:03.121 #define SPDK_CONFIG_VHOST 1 00:25:03.121 #define SPDK_CONFIG_VIRTIO 1 00:25:03.121 #undef SPDK_CONFIG_VTUNE 00:25:03.121 #define SPDK_CONFIG_VTUNE_DIR 00:25:03.121 #define SPDK_CONFIG_WERROR 1 00:25:03.121 #define SPDK_CONFIG_WPDK_DIR 00:25:03.121 #undef SPDK_CONFIG_XNVME 00:25:03.121 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:25:03.121 21:21:25 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:25:03.121 21:21:25 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:03.121 21:21:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.121 21:21:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.121 21:21:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.121 21:21:25 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:03.121 21:21:25 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:03.121 21:21:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:03.121 21:21:25 -- paths/export.sh@5 -- # export PATH 00:25:03.121 21:21:25 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:03.121 21:21:25 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:25:03.121 21:21:25 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:25:03.121 21:21:25 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:25:03.121 21:21:25 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:25:03.121 21:21:25 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:25:03.121 21:21:25 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:25:03.121 21:21:25 -- pm/common@16 -- # TEST_TAG=N/A 00:25:03.121 21:21:25 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:25:03.121 21:21:25 -- common/autotest_common.sh@52 -- # : 1 00:25:03.121 21:21:25 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:25:03.121 21:21:25 -- common/autotest_common.sh@56 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:25:03.121 21:21:25 -- common/autotest_common.sh@58 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:25:03.121 21:21:25 -- common/autotest_common.sh@60 -- # : 1 00:25:03.121 21:21:25 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:25:03.121 21:21:25 -- common/autotest_common.sh@62 -- # : 1 00:25:03.121 21:21:25 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:25:03.121 21:21:25 -- common/autotest_common.sh@64 -- # : 00:25:03.121 21:21:25 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:25:03.121 21:21:25 -- common/autotest_common.sh@66 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:25:03.121 21:21:25 -- common/autotest_common.sh@68 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:25:03.121 21:21:25 -- common/autotest_common.sh@70 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:25:03.121 21:21:25 -- common/autotest_common.sh@72 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:25:03.121 21:21:25 -- common/autotest_common.sh@74 -- # : 1 00:25:03.121 21:21:25 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:25:03.121 21:21:25 -- common/autotest_common.sh@76 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:25:03.121 21:21:25 -- common/autotest_common.sh@78 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:25:03.121 21:21:25 -- common/autotest_common.sh@80 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:25:03.121 21:21:25 -- common/autotest_common.sh@82 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:25:03.121 21:21:25 -- common/autotest_common.sh@84 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:25:03.121 21:21:25 -- 
common/autotest_common.sh@86 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:25:03.121 21:21:25 -- common/autotest_common.sh@88 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:25:03.121 21:21:25 -- common/autotest_common.sh@90 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:25:03.121 21:21:25 -- common/autotest_common.sh@92 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:25:03.121 21:21:25 -- common/autotest_common.sh@94 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:25:03.121 21:21:25 -- common/autotest_common.sh@96 -- # : rdma 00:25:03.121 21:21:25 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:25:03.121 21:21:25 -- common/autotest_common.sh@98 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:25:03.121 21:21:25 -- common/autotest_common.sh@100 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:25:03.121 21:21:25 -- common/autotest_common.sh@102 -- # : 1 00:25:03.121 21:21:25 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:25:03.121 21:21:25 -- common/autotest_common.sh@104 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:25:03.121 21:21:25 -- common/autotest_common.sh@106 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:25:03.121 21:21:25 -- common/autotest_common.sh@108 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:25:03.121 21:21:25 -- common/autotest_common.sh@110 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:25:03.121 21:21:25 -- common/autotest_common.sh@112 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:25:03.121 21:21:25 -- common/autotest_common.sh@114 -- # : 1 00:25:03.121 21:21:25 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:25:03.121 21:21:25 -- common/autotest_common.sh@116 -- # : 1 00:25:03.121 21:21:25 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:25:03.121 21:21:25 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:25:03.121 21:21:25 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:25:03.121 21:21:25 -- common/autotest_common.sh@120 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:25:03.121 21:21:25 -- common/autotest_common.sh@122 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:25:03.121 21:21:25 -- common/autotest_common.sh@124 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:25:03.121 21:21:25 -- common/autotest_common.sh@126 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:25:03.121 21:21:25 -- common/autotest_common.sh@128 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:25:03.121 21:21:25 -- common/autotest_common.sh@130 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:25:03.121 21:21:25 -- common/autotest_common.sh@132 -- # : v23.11 00:25:03.121 21:21:25 -- common/autotest_common.sh@133 -- # export 
SPDK_TEST_NATIVE_DPDK 00:25:03.121 21:21:25 -- common/autotest_common.sh@134 -- # : true 00:25:03.121 21:21:25 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:25:03.121 21:21:25 -- common/autotest_common.sh@136 -- # : 1 00:25:03.121 21:21:25 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:25:03.121 21:21:25 -- common/autotest_common.sh@138 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:25:03.121 21:21:25 -- common/autotest_common.sh@140 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:25:03.121 21:21:25 -- common/autotest_common.sh@142 -- # : 0 00:25:03.121 21:21:25 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:25:03.122 21:21:25 -- common/autotest_common.sh@144 -- # : 0 00:25:03.122 21:21:25 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:25:03.122 21:21:25 -- common/autotest_common.sh@146 -- # : 0 00:25:03.122 21:21:25 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:25:03.122 21:21:25 -- common/autotest_common.sh@148 -- # : 00:25:03.122 21:21:25 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:25:03.122 21:21:25 -- common/autotest_common.sh@150 -- # : 0 00:25:03.122 21:21:25 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:25:03.122 21:21:25 -- common/autotest_common.sh@152 -- # : 0 00:25:03.122 21:21:25 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:25:03.122 21:21:25 -- common/autotest_common.sh@154 -- # : 0 00:25:03.122 21:21:25 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:25:03.122 21:21:25 -- common/autotest_common.sh@156 -- # : 0 00:25:03.122 21:21:25 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:25:03.122 21:21:25 -- common/autotest_common.sh@158 -- # : 0 00:25:03.122 21:21:25 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:25:03.122 21:21:25 -- common/autotest_common.sh@160 -- # : 0 00:25:03.122 21:21:25 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:25:03.122 21:21:25 -- common/autotest_common.sh@163 -- # : 00:25:03.122 21:21:25 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:25:03.122 21:21:25 -- common/autotest_common.sh@165 -- # : 0 00:25:03.122 21:21:25 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:25:03.122 21:21:25 -- common/autotest_common.sh@167 -- # : 0 00:25:03.122 21:21:25 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:25:03.122 21:21:25 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:25:03.122 21:21:25 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:25:03.122 21:21:25 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:25:03.122 21:21:25 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:25:03.122 21:21:25 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:03.122 21:21:25 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:03.122 21:21:25 -- common/autotest_common.sh@174 -- # export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:03.122 21:21:25 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:03.122 21:21:25 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:25:03.122 21:21:25 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:25:03.122 21:21:25 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:25:03.122 21:21:25 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:25:03.122 21:21:25 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:25:03.122 21:21:25 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:25:03.122 21:21:25 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:25:03.122 21:21:25 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:25:03.122 21:21:25 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:25:03.122 21:21:25 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:25:03.122 21:21:25 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:25:03.122 21:21:25 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:25:03.122 21:21:25 -- common/autotest_common.sh@196 -- # cat 00:25:03.122 21:21:25 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:25:03.122 21:21:25 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:25:03.122 21:21:25 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:25:03.122 21:21:25 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:25:03.122 21:21:25 -- common/autotest_common.sh@226 -- # 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:25:03.122 21:21:25 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:25:03.122 21:21:25 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:25:03.122 21:21:25 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:25:03.122 21:21:25 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:25:03.122 21:21:25 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:25:03.122 21:21:25 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:25:03.122 21:21:25 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:25:03.122 21:21:25 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:25:03.122 21:21:25 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:25:03.122 21:21:25 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:25:03.122 21:21:25 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:25:03.122 21:21:25 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:25:03.122 21:21:25 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:25:03.122 21:21:25 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:25:03.122 21:21:25 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:25:03.122 21:21:25 -- common/autotest_common.sh@249 -- # export valgrind= 00:25:03.122 21:21:25 -- common/autotest_common.sh@249 -- # valgrind= 00:25:03.122 21:21:25 -- common/autotest_common.sh@255 -- # uname -s 00:25:03.122 21:21:25 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:25:03.122 21:21:25 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:25:03.122 21:21:25 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:25:03.122 21:21:25 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:25:03.122 21:21:25 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:25:03.122 21:21:25 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:25:03.122 21:21:25 -- common/autotest_common.sh@265 -- # MAKE=make 00:25:03.122 21:21:25 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:25:03.122 21:21:25 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:25:03.122 21:21:25 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:25:03.122 21:21:25 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:25:03.122 21:21:25 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:25:03.122 21:21:25 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:25:03.122 21:21:25 -- common/autotest_common.sh@309 -- # [[ -z 147083 ]] 00:25:03.122 21:21:25 -- common/autotest_common.sh@309 -- # kill -0 147083 00:25:03.122 21:21:25 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:25:03.122 21:21:25 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:25:03.122 21:21:25 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:25:03.122 21:21:25 -- common/autotest_common.sh@322 -- # local mount target_dir 00:25:03.122 21:21:25 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:25:03.122 21:21:25 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:25:03.122 21:21:25 -- 
common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:25:03.122 21:21:25 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:25:03.122 21:21:25 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.D5auFi 00:25:03.122 21:21:25 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:25:03.122 21:21:25 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:25:03.122 21:21:25 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:25:03.122 21:21:25 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.D5auFi/tests/interrupt /tmp/spdk.D5auFi 00:25:03.122 21:21:25 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:25:03.122 21:21:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:03.122 21:21:25 -- common/autotest_common.sh@318 -- # df -T 00:25:03.122 21:21:25 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:25:03.122 21:21:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=udev 00:25:03.122 21:21:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:25:03.122 21:21:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=6224465920 00:25:03.122 21:21:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6224465920 00:25:03.122 21:21:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:25:03.122 21:21:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:03.122 21:21:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:25:03.122 21:21:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:03.122 21:21:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=1249759232 00:25:03.122 21:21:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254514688 00:25:03.122 21:21:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=4755456 00:25:03.122 21:21:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:03.122 21:21:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:25:03.122 21:21:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:25:03.122 21:21:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=8592982016 00:25:03.122 21:21:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:25:03.122 21:21:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=12007034880 00:25:03.122 21:21:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:03.122 21:21:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=6271307776 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272565248 00:25:03.123 21:21:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:25:03.123 21:21:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:25:03.123 21:21:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 
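The mount table being walked here is autotest's set_test_storage helper sizing up every mount before it picks one with enough free space for the ~2 GiB (plus margin) that the interrupt tests request. A minimal sketch of the scan, assuming GNU df (which reports 1K blocks, hence the byte conversion; the real helper lives in test/common/autotest_common.sh and continues through the remaining mounts below):

  declare -A mounts fss sizes avails uses
  while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source           # block device or remote source
    fss["$mount"]=$fs                  # filesystem type (ext4, tmpfs, ...)
    sizes["$mount"]=$((size * 1024))   # df -T prints 1K blocks
    uses["$mount"]=$((use * 1024))
    avails["$mount"]=$((avail * 1024))
  done < <(df -T | grep -v Filesystem)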
00:25:03.123 21:21:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=6272565248 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272565248 00:25:03.123 21:21:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:25:03.123 21:21:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop0 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:25:03.123 21:21:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:25:03.123 21:21:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=103089152 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109422592 00:25:03.123 21:21:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:25:03.123 21:21:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop1 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=96337920 00:25:03.123 21:21:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=96337920 00:25:03.123 21:21:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop2 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=41025536 00:25:03.123 21:21:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=41025536 00:25:03.123 21:21:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=1254510592 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254510592 00:25:03.123 21:21:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:25:03.123 21:21:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=95661981696 
00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:25:03.123 21:21:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=4040798208 00:25:03.123 21:21:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop3 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=40763392 00:25:03.123 21:21:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=40763392 00:25:03.123 21:21:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop4 00:25:03.123 21:21:25 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:25:03.123 21:21:25 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:25:03.123 21:21:25 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:25:03.123 21:21:25 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:03.123 21:21:25 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:25:03.123 * Looking for test storage... 00:25:03.123 21:21:25 -- common/autotest_common.sh@359 -- # local target_space new_size 00:25:03.123 21:21:25 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:25:03.123 21:21:25 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:03.123 21:21:25 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:25:03.123 21:21:25 -- common/autotest_common.sh@363 -- # mount=/ 00:25:03.123 21:21:25 -- common/autotest_common.sh@365 -- # target_space=8592982016 00:25:03.123 21:21:25 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:25:03.123 21:21:25 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:25:03.123 21:21:25 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:25:03.123 21:21:25 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:25:03.123 21:21:25 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:25:03.123 21:21:25 -- common/autotest_common.sh@372 -- # new_size=14221627392 00:25:03.123 21:21:25 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:25:03.123 21:21:25 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:25:03.123 21:21:25 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:25:03.123 21:21:25 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:03.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:03.123 21:21:25 -- common/autotest_common.sh@380 -- # return 0 00:25:03.123 21:21:25 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:25:03.123 21:21:25 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:25:03.123 21:21:25 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:25:03.123 21:21:25 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} 
-- \$ ' 00:25:03.123 21:21:25 -- common/autotest_common.sh@1672 -- # true 00:25:03.123 21:21:25 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:25:03.123 21:21:25 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:25:03.123 21:21:25 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:25:03.123 21:21:25 -- common/autotest_common.sh@27 -- # exec 00:25:03.123 21:21:25 -- common/autotest_common.sh@29 -- # exec 00:25:03.123 21:21:25 -- common/autotest_common.sh@31 -- # xtrace_restore 00:25:03.123 21:21:25 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:25:03.123 21:21:25 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:25:03.123 21:21:25 -- common/autotest_common.sh@18 -- # set -x 00:25:03.123 21:21:25 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:03.123 21:21:25 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:25:03.123 21:21:25 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:25:03.123 21:21:25 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:25:03.123 21:21:25 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:25:03.123 21:21:25 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:25:03.123 21:21:25 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:25:03.123 21:21:25 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:25:03.123 21:21:25 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:25:03.123 21:21:25 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.123 21:21:25 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:25:03.123 21:21:25 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=147123 00:25:03.123 21:21:25 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:03.123 21:21:25 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 147123 /var/tmp/spdk.sock 00:25:03.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.123 21:21:25 -- common/autotest_common.sh@819 -- # '[' -z 147123 ']' 00:25:03.123 21:21:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.123 21:21:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:03.123 21:21:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.123 21:21:25 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:25:03.123 21:21:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:03.123 21:21:25 -- common/autotest_common.sh@10 -- # set +x 00:25:03.382 [2024-06-07 21:21:25.801802] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
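With the interrupt target up on core mask 0x07 and listening on /var/tmp/spdk.sock, the reap_unregistered_poller test queries the app thread over RPC; the thread_get_pollers JSON that follows is picked apart with jq. Roughly, under the paths used in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  app_thread=$($rpc -s /var/tmp/spdk.sock thread_get_pollers | jq -r '.threads[0]')
  native_pollers=$(jq -r '.active_pollers[].name' <<< "$app_thread")
  native_pollers+=" $(jq -r '.timed_pollers[].name' <<< "$app_thread")"

It then creates an AIO bdev (whose examine path registers and unregisters extra pollers), queries again, and passes only if the surviving poller list still matches the native rpc_subsystem_poll captured here.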
00:25:03.382 [2024-06-07 21:21:25.802058] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147123 ] 00:25:03.382 [2024-06-07 21:21:25.977152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:03.641 [2024-06-07 21:21:26.107012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.641 [2024-06-07 21:21:26.107173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.641 [2024-06-07 21:21:26.107188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.641 [2024-06-07 21:21:26.227496] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:04.208 21:21:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:04.208 21:21:26 -- common/autotest_common.sh@852 -- # return 0 00:25:04.208 21:21:26 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:25:04.208 21:21:26 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:25:04.208 21:21:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.208 21:21:26 -- common/autotest_common.sh@10 -- # set +x 00:25:04.208 21:21:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.208 21:21:26 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:25:04.208 "name": "app_thread", 00:25:04.208 "id": 1, 00:25:04.208 "active_pollers": [], 00:25:04.208 "timed_pollers": [ 00:25:04.208 { 00:25:04.208 "name": "rpc_subsystem_poll", 00:25:04.208 "id": 1, 00:25:04.208 "state": "waiting", 00:25:04.208 "run_count": 0, 00:25:04.208 "busy_count": 0, 00:25:04.208 "period_ticks": 8800000 00:25:04.208 } 00:25:04.208 ], 00:25:04.208 "paused_pollers": [] 00:25:04.208 }' 00:25:04.208 21:21:26 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:25:04.466 21:21:26 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:25:04.466 21:21:26 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:25:04.466 21:21:26 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:25:04.466 21:21:26 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:25:04.466 21:21:26 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:25:04.466 21:21:26 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:25:04.466 21:21:26 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:25:04.466 21:21:26 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:25:04.466 5000+0 records in 00:25:04.466 5000+0 records out 00:25:04.466 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0277349 s, 369 MB/s 00:25:04.466 21:21:27 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:25:04.724 AIO0 00:25:04.724 21:21:27 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:04.983 21:21:27 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:25:04.983 21:21:27 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:25:04.983 21:21:27 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r 
'.threads[0]' 00:25:04.983 21:21:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.983 21:21:27 -- common/autotest_common.sh@10 -- # set +x 00:25:04.983 21:21:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.241 21:21:27 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:25:05.241 "name": "app_thread", 00:25:05.241 "id": 1, 00:25:05.241 "active_pollers": [], 00:25:05.241 "timed_pollers": [ 00:25:05.241 { 00:25:05.241 "name": "rpc_subsystem_poll", 00:25:05.241 "id": 1, 00:25:05.241 "state": "waiting", 00:25:05.241 "run_count": 0, 00:25:05.241 "busy_count": 0, 00:25:05.241 "period_ticks": 8800000 00:25:05.241 } 00:25:05.241 ], 00:25:05.241 "paused_pollers": [] 00:25:05.241 }' 00:25:05.241 21:21:27 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:25:05.241 21:21:27 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:25:05.241 21:21:27 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:25:05.241 21:21:27 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:25:05.241 21:21:27 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:25:05.241 21:21:27 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:25:05.241 21:21:27 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:25:05.241 21:21:27 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 147123 00:25:05.242 21:21:27 -- common/autotest_common.sh@926 -- # '[' -z 147123 ']' 00:25:05.242 21:21:27 -- common/autotest_common.sh@930 -- # kill -0 147123 00:25:05.242 21:21:27 -- common/autotest_common.sh@931 -- # uname 00:25:05.242 21:21:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:05.242 21:21:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 147123 00:25:05.242 21:21:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:05.242 killing process with pid 147123 00:25:05.242 21:21:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:05.242 21:21:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 147123' 00:25:05.242 21:21:27 -- common/autotest_common.sh@945 -- # kill 147123 00:25:05.242 21:21:27 -- common/autotest_common.sh@950 -- # wait 147123 00:25:05.807 21:21:28 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:25:05.807 21:21:28 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:25:05.807 00:25:05.807 real 0m2.594s 00:25:05.807 user 0m1.901s 00:25:05.807 sys 0m0.503s 00:25:05.807 ************************************ 00:25:05.807 END TEST reap_unregistered_poller 00:25:05.807 ************************************ 00:25:05.807 21:21:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:05.807 21:21:28 -- common/autotest_common.sh@10 -- # set +x 00:25:05.807 21:21:28 -- spdk/autotest.sh@204 -- # uname -s 00:25:05.807 21:21:28 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:25:05.807 21:21:28 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:25:05.807 21:21:28 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:25:05.807 21:21:28 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:25:05.807 21:21:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:05.807 21:21:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:05.807 21:21:28 -- 
common/autotest_common.sh@10 -- # set +x 00:25:05.807 ************************************ 00:25:05.807 START TEST spdk_dd 00:25:05.807 ************************************ 00:25:05.807 21:21:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:25:05.807 * Looking for test storage... 00:25:05.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:05.807 21:21:28 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:05.807 21:21:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.807 21:21:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.807 21:21:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.807 21:21:28 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:05.807 21:21:28 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:05.807 21:21:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:05.807 21:21:28 -- paths/export.sh@5 -- # export PATH 00:25:05.807 21:21:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:05.807 21:21:28 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:06.065 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:25:06.065 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:07.447 21:21:29 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:25:07.447 21:21:29 -- dd/dd.sh@11 -- # nvme_in_userspace 00:25:07.447 21:21:29 -- scripts/common.sh@311 -- # local bdf bdfs 00:25:07.447 21:21:29 -- scripts/common.sh@312 -- # local nvmes 00:25:07.447 21:21:29 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:25:07.447 21:21:29 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:25:07.447 21:21:29 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:25:07.447 21:21:29 -- scripts/common.sh@297 -- # local bdf= 00:25:07.447 21:21:29 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:25:07.447 21:21:29 -- scripts/common.sh@232 -- # local class 00:25:07.447 
21:21:29 -- scripts/common.sh@233 -- # local subclass 00:25:07.447 21:21:29 -- scripts/common.sh@234 -- # local progif 00:25:07.447 21:21:29 -- scripts/common.sh@235 -- # printf %02x 1 00:25:07.447 21:21:29 -- scripts/common.sh@235 -- # class=01 00:25:07.447 21:21:29 -- scripts/common.sh@236 -- # printf %02x 8 00:25:07.447 21:21:29 -- scripts/common.sh@236 -- # subclass=08 00:25:07.447 21:21:29 -- scripts/common.sh@237 -- # printf %02x 2 00:25:07.447 21:21:29 -- scripts/common.sh@237 -- # progif=02 00:25:07.447 21:21:29 -- scripts/common.sh@239 -- # hash lspci 00:25:07.447 21:21:29 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:25:07.447 21:21:29 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:25:07.447 21:21:29 -- scripts/common.sh@242 -- # grep -i -- -p02 00:25:07.447 21:21:29 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:25:07.447 21:21:29 -- scripts/common.sh@244 -- # tr -d '"' 00:25:07.447 21:21:29 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:07.447 21:21:29 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:25:07.447 21:21:29 -- scripts/common.sh@15 -- # local i 00:25:07.447 21:21:29 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:25:07.447 21:21:29 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:25:07.447 21:21:29 -- scripts/common.sh@24 -- # return 0 00:25:07.447 21:21:29 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:25:07.447 21:21:29 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:25:07.447 21:21:29 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:25:07.447 21:21:29 -- scripts/common.sh@322 -- # uname -s 00:25:07.447 21:21:29 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:25:07.447 21:21:29 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:25:07.447 21:21:29 -- scripts/common.sh@327 -- # (( 1 )) 00:25:07.447 21:21:29 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:25:07.447 21:21:29 -- dd/dd.sh@13 -- # check_liburing 00:25:07.447 21:21:29 -- dd/common.sh@139 -- # local lib so 00:25:07.447 21:21:29 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:25:07.447 21:21:29 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ libasan.so.5 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ libdl.so.2 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ librt.so.1 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- 
# read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ libssl.so.1.1 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ libcrypto.so.1.1 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ libpthread.so.0 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:25:07.447 21:21:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:07.447 21:21:29 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:25:07.447 21:21:29 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:25:07.447 21:21:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:07.447 21:21:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:07.447 21:21:29 -- common/autotest_common.sh@10 -- # set +x 00:25:07.447 ************************************ 00:25:07.447 START TEST spdk_dd_basic_rw 00:25:07.447 ************************************ 00:25:07.447 21:21:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:25:07.447 * Looking for test storage... 
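The check_liburing probe just traced runs spdk_dd with LD_TRACE_LOADED_OBJECTS=1, which makes the dynamic loader print the binary's resolved shared objects (ldd-style) instead of executing it; each name is then matched against liburing.so.*. A condensed sketch:

  liburing_in_use=0
  while read -r lib _ so _; do
    [[ $lib == liburing.so.* ]] && liburing_in_use=1
  done < <(LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)

This build was configured with CONFIG_URING=n, so nothing matches and liburing_in_use stays 0, as the dd/dd.sh@15 check above confirms.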
00:25:07.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:07.447 21:21:29 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:07.447 21:21:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.447 21:21:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.447 21:21:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.447 21:21:29 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:07.447 21:21:29 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:07.447 21:21:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:07.447 21:21:29 -- paths/export.sh@5 -- # export PATH 00:25:07.447 21:21:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:07.447 21:21:29 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:25:07.447 21:21:29 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:25:07.447 21:21:29 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:25:07.447 21:21:29 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:25:07.447 21:21:29 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:25:07.447 21:21:29 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:25:07.447 21:21:29 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:25:07.448 21:21:29 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:07.448 21:21:29 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:07.448 21:21:29 -- dd/basic_rw.sh@93 
-- # get_native_nvme_bs 0000:00:06.0 00:25:07.448 21:21:29 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:25:07.448 21:21:29 -- dd/common.sh@126 -- # mapfile -t id 00:25:07.448 21:21:29 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:25:07.448 21:21:30 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects 
Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 113 Data Units Written: 7 Host Read Commands: 2338 Host Write Commands: 114 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 
Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:25:07.448 21:21:30 -- dd/common.sh@130 -- # lbaf=04
00:25:07.449 21:21:30 -- dd/common.sh@131 -- # [[ [NVMe identify output for the controller at 0000:00:06.0 repeated verbatim by xtrace, as above] =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:25:07.449 21:21:30 -- dd/common.sh@132 -- # lbaf=4096 00:25:07.449 21:21:30 -- dd/common.sh@134 -- # echo 4096 00:25:07.449 21:21:30 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:25:07.449 21:21:30 -- dd/basic_rw.sh@96 -- # : 00:25:07.449 21:21:30 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:25:07.449 21:21:30 -- dd/basic_rw.sh@96 -- # gen_conf 00:25:07.449 21:21:30 -- dd/common.sh@31 -- # xtrace_disable 00:25:07.449 21:21:30 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:25:07.449 21:21:30 -- common/autotest_common.sh@10 -- # set +x 00:25:07.449 21:21:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:07.449 21:21:30 -- common/autotest_common.sh@10 -- # set +x 00:25:07.449 ************************************ 00:25:07.449 START TEST dd_bs_lt_native_bs
00:25:07.449 ************************************ 00:25:07.449 21:21:30 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:25:07.449 21:21:30 -- common/autotest_common.sh@640 -- # local es=0 00:25:07.449 21:21:30 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:25:07.449 21:21:30 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:07.449 21:21:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:07.449 21:21:30 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:07.449 21:21:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:07.449 21:21:30 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:07.707 21:21:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:07.707 21:21:30 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:07.707 21:21:30 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:07.707 21:21:30 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:25:07.707 { 00:25:07.707 "subsystems": [ 00:25:07.707 { 00:25:07.707 "subsystem": "bdev", 00:25:07.707 "config": [ 00:25:07.707 { 00:25:07.707 "params": { 00:25:07.707 "trtype": "pcie", 00:25:07.707 "traddr": "0000:00:06.0", 00:25:07.707 "name": "Nvme0" 00:25:07.707 }, 00:25:07.707 "method": "bdev_nvme_attach_controller" 00:25:07.707 }, 00:25:07.707 { 00:25:07.707 "method": "bdev_wait_for_examine" 00:25:07.707 } 00:25:07.707 ] 00:25:07.707 } 00:25:07.707 ] 00:25:07.707 } 00:25:07.707 [2024-06-07 21:21:30.187641] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
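The get_native_nvme_bs probe above derives the drive's native block size purely from spdk_nvme_identify text: one regex pulls the index of the current LBA format, a second pulls that format's data size (format #04, 4096 bytes, on this QEMU controller). A minimal standalone sketch of the same two-step match, assuming spdk_nvme_identify is on PATH (the run above uses the full build/bin path):

    pci=0000:00:06.0
    # capture the identify report; ${id[*]} flattens it for regex matching
    mapfile -t id < <(spdk_nvme_identify -r "trtype:pcie traddr:$pci")
    re1='Current LBA Format: *LBA Format #([0-9]+)'
    [[ ${id[*]} =~ $re1 ]] && lbaf=${BASH_REMATCH[1]}
    re2="LBA Format #$lbaf: Data Size: *([0-9]+)"
    [[ ${id[*]} =~ $re2 ]] && native_bs=${BASH_REMATCH[1]}
    echo "$native_bs"    # 4096 here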
00:25:07.708 [2024-06-07 21:21:30.187905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147417 ] 00:25:07.708 [2024-06-07 21:21:30.357393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.966 [2024-06-07 21:21:30.446931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.225 [2024-06-07 21:21:30.642264] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:25:08.225 [2024-06-07 21:21:30.642406] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:08.225 [2024-06-07 21:21:30.843369] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:08.484 21:21:30 -- common/autotest_common.sh@643 -- # es=234 00:25:08.484 21:21:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:08.484 21:21:30 -- common/autotest_common.sh@652 -- # es=106 00:25:08.484 21:21:30 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:08.484 21:21:30 -- common/autotest_common.sh@660 -- # es=1 00:25:08.484 21:21:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:08.484 00:25:08.484 real 0m0.880s 00:25:08.484 user 0m0.570s 00:25:08.484 sys 0m0.278s 00:25:08.484 21:21:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:08.484 21:21:30 -- common/autotest_common.sh@10 -- # set +x 00:25:08.484 ************************************ 00:25:08.484 END TEST dd_bs_lt_native_bs 00:25:08.484 ************************************ 00:25:08.484 21:21:31 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:25:08.484 21:21:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:08.484 21:21:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:08.484 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:25:08.484 ************************************ 00:25:08.484 START TEST dd_rw 00:25:08.484 ************************************ 00:25:08.484 21:21:31 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:25:08.484 21:21:31 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:25:08.484 21:21:31 -- dd/basic_rw.sh@12 -- # local count size 00:25:08.484 21:21:31 -- dd/basic_rw.sh@13 -- # local qds bss 00:25:08.484 21:21:31 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:25:08.484 21:21:31 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:25:08.484 21:21:31 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:25:08.484 21:21:31 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:25:08.484 21:21:31 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:25:08.484 21:21:31 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:25:08.484 21:21:31 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:25:08.484 21:21:31 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:25:08.484 21:21:31 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:08.484 21:21:31 -- dd/basic_rw.sh@23 -- # count=15 00:25:08.484 21:21:31 -- dd/basic_rw.sh@24 -- # count=15 00:25:08.484 21:21:31 -- dd/basic_rw.sh@25 -- # size=61440 00:25:08.484 21:21:31 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:25:08.484 21:21:31 -- dd/common.sh@98 -- # xtrace_disable 00:25:08.484 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:25:09.051 21:21:31 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
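run_test dd_bs_lt_native_bs wraps spdk_dd in NOT, so the test only passes because spdk_dd refuses the copy: --bs=2048 is below the 4096-byte native block size, producing the "--bs value cannot be less than ... native block size" error above, after which the helper folds the raw exit status into a pass (es=234 -> 106 -> 1 in this run). A simplified, hypothetical version of such a negative-test helper (the real autotest_common.sh helper also resolves the executable with type -t/type -P, as the xtrace shows):

    # succeed only when the wrapped command fails
    NOT() {
        if "$@"; then
            return 1    # unexpected success: the test should fail
        fi
        return 0        # expected failure: the test passes
    }
    # usage, mirroring the invocation above:
    # NOT spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61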
00:25:09.052 21:21:31 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:09.052 21:21:31 -- dd/common.sh@31 -- # xtrace_disable 00:25:09.052 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:25:09.052 [2024-06-07 21:21:31.697120] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:09.052 [2024-06-07 21:21:31.698052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147465 ] 00:25:09.052 { 00:25:09.052 "subsystems": [ 00:25:09.052 { 00:25:09.052 "subsystem": "bdev", 00:25:09.052 "config": [ 00:25:09.052 { 00:25:09.052 "params": { 00:25:09.052 "trtype": "pcie", 00:25:09.052 "traddr": "0000:00:06.0", 00:25:09.052 "name": "Nvme0" 00:25:09.052 }, 00:25:09.052 "method": "bdev_nvme_attach_controller" 00:25:09.052 }, 00:25:09.052 { 00:25:09.052 "method": "bdev_wait_for_examine" 00:25:09.052 } 00:25:09.052 ] 00:25:09.052 } 00:25:09.052 ] 00:25:09.052 } 00:25:09.310 [2024-06-07 21:21:31.872812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.310 [2024-06-07 21:21:31.966852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.137  Copying: 60/60 [kB] (average 19 MBps) 00:25:10.137 00:25:10.137 21:21:32 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:10.137 21:21:32 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:25:10.137 21:21:32 -- dd/common.sh@31 -- # xtrace_disable 00:25:10.138 21:21:32 -- common/autotest_common.sh@10 -- # set +x 00:25:10.138 [2024-06-07 21:21:32.636523] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
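Every spdk_dd invocation in these tests receives its bdev configuration the same way: gen_conf emits the JSON shown above and the shell hands it to --json as an anonymous descriptor (/dev/fd/62), so no config file is written to disk. A sketch of an equivalent one-shot call using process substitution (spdk_dd stands for the full build/bin path used above; the dump-file name is shortened):

    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"trtype":"pcie","traddr":"0000:00:06.0","name":"Nvme0"},
       "method":"bdev_nvme_attach_controller"},
      {"method":"bdev_wait_for_examine"}]}]}'
    # the JSON reaches spdk_dd as a file descriptor, never as a file on disk
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(printf '%s' "$conf")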
00:25:10.138 [2024-06-07 21:21:32.636726] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147487 ] 00:25:10.138 { 00:25:10.138 "subsystems": [ 00:25:10.138 { 00:25:10.138 "subsystem": "bdev", 00:25:10.138 "config": [ 00:25:10.138 { 00:25:10.138 "params": { 00:25:10.138 "trtype": "pcie", 00:25:10.138 "traddr": "0000:00:06.0", 00:25:10.138 "name": "Nvme0" 00:25:10.138 }, 00:25:10.138 "method": "bdev_nvme_attach_controller" 00:25:10.138 }, 00:25:10.138 { 00:25:10.138 "method": "bdev_wait_for_examine" 00:25:10.138 } 00:25:10.138 ] 00:25:10.138 } 00:25:10.138 ] 00:25:10.138 } 00:25:10.138 [2024-06-07 21:21:32.794828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.396 [2024-06-07 21:21:32.919100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.912  Copying: 60/60 [kB] (average 19 MBps) 00:25:10.912 00:25:10.912 21:21:33 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:10.912 21:21:33 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:25:10.912 21:21:33 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:10.912 21:21:33 -- dd/common.sh@11 -- # local nvme_ref= 00:25:10.912 21:21:33 -- dd/common.sh@12 -- # local size=61440 00:25:10.912 21:21:33 -- dd/common.sh@14 -- # local bs=1048576 00:25:10.912 21:21:33 -- dd/common.sh@15 -- # local count=1 00:25:10.912 21:21:33 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:10.912 21:21:33 -- dd/common.sh@18 -- # gen_conf 00:25:10.912 21:21:33 -- dd/common.sh@31 -- # xtrace_disable 00:25:10.912 21:21:33 -- common/autotest_common.sh@10 -- # set +x 00:25:11.170 { 00:25:11.170 "subsystems": [ 00:25:11.170 { 00:25:11.170 "subsystem": "bdev", 00:25:11.170 "config": [ 00:25:11.170 { 00:25:11.170 "params": { 00:25:11.170 "trtype": "pcie", 00:25:11.170 "traddr": "0000:00:06.0", 00:25:11.170 "name": "Nvme0" 00:25:11.170 }, 00:25:11.171 "method": "bdev_nvme_attach_controller" 00:25:11.171 }, 00:25:11.171 { 00:25:11.171 "method": "bdev_wait_for_examine" 00:25:11.171 } 00:25:11.171 ] 00:25:11.171 } 00:25:11.171 ] 00:25:11.171 } 00:25:11.171 [2024-06-07 21:21:33.609084] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
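Each dd_rw pass follows the same four-step cycle visible above: write a generated pattern to the bdev, read it back into a second dump file, compare the two with diff -q, then wipe the region via clear_nvme, which streams /dev/zero back in with bs=1048576 count=1. A condensed sketch of one pass, reusing $conf from the previous sketch and assuming the suite's gen_bytes helper writes its pattern to stdout (a stand-in for it is sketched a little further below):

    size=61440 bs=4096 qd=1
    count=$((size / bs))                       # 15 blocks, as in this first pass
    gen_bytes "$size" > dd.dump0               # random test pattern
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(printf '%s' "$conf")
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" \
            --json <(printf '%s' "$conf")
    diff -q dd.dump0 dd.dump1                  # any mismatch fails the pass
    # clear_nvme equivalent: zero the touched region before the next pass
    spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(printf '%s' "$conf")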
00:25:11.171 [2024-06-07 21:21:33.609324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147508 ] 00:25:11.171 [2024-06-07 21:21:33.777178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.429 [2024-06-07 21:21:33.897102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.995  Copying: 1024/1024 [kB] (average 500 MBps) 00:25:11.995 00:25:11.995 21:21:34 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:11.995 21:21:34 -- dd/basic_rw.sh@23 -- # count=15 00:25:11.995 21:21:34 -- dd/basic_rw.sh@24 -- # count=15 00:25:11.995 21:21:34 -- dd/basic_rw.sh@25 -- # size=61440 00:25:11.995 21:21:34 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:25:11.995 21:21:34 -- dd/common.sh@98 -- # xtrace_disable 00:25:11.995 21:21:34 -- common/autotest_common.sh@10 -- # set +x 00:25:12.561 21:21:35 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:25:12.561 21:21:35 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:12.561 21:21:35 -- dd/common.sh@31 -- # xtrace_disable 00:25:12.561 21:21:35 -- common/autotest_common.sh@10 -- # set +x 00:25:12.561 [2024-06-07 21:21:35.222717] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:12.561 [2024-06-07 21:21:35.222913] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147546 ] 00:25:12.820 { 00:25:12.820 "subsystems": [ 00:25:12.820 { 00:25:12.820 "subsystem": "bdev", 00:25:12.820 "config": [ 00:25:12.820 { 00:25:12.820 "params": { 00:25:12.820 "trtype": "pcie", 00:25:12.820 "traddr": "0000:00:06.0", 00:25:12.820 "name": "Nvme0" 00:25:12.820 }, 00:25:12.820 "method": "bdev_nvme_attach_controller" 00:25:12.820 }, 00:25:12.820 { 00:25:12.820 "method": "bdev_wait_for_examine" 00:25:12.820 } 00:25:12.820 ] 00:25:12.820 } 00:25:12.820 ] 00:25:12.820 } 00:25:12.820 [2024-06-07 21:21:35.379195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.078 [2024-06-07 21:21:35.500535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.645  Copying: 60/60 [kB] (average 58 MBps) 00:25:13.645 00:25:13.645 21:21:36 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:25:13.645 21:21:36 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:13.645 21:21:36 -- dd/common.sh@31 -- # xtrace_disable 00:25:13.645 21:21:36 -- common/autotest_common.sh@10 -- # set +x 00:25:13.645 [2024-06-07 21:21:36.200752] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
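gen_bytes itself only ever appears by name in this log. Judging from the pattern it produces for dd_rw_offset later on, its output is lowercase alphanumeric noise, so a plausible stand-in (purely hypothetical; the suite's real helper in test/dd/common.sh may differ) is:

    # emit n pseudo-random lowercase-alphanumeric bytes on stdout
    gen_bytes() {
        local n=$1
        tr -dc 'a-z0-9' < /dev/urandom | head -c "$n"
    }
    gen_bytes 61440 > dd.dump0    # the 60 KiB pattern used by the bs=4096 passes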
00:25:13.645 [2024-06-07 21:21:36.201592] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147566 ] 00:25:13.645 { 00:25:13.645 "subsystems": [ 00:25:13.645 { 00:25:13.645 "subsystem": "bdev", 00:25:13.645 "config": [ 00:25:13.645 { 00:25:13.645 "params": { 00:25:13.645 "trtype": "pcie", 00:25:13.645 "traddr": "0000:00:06.0", 00:25:13.645 "name": "Nvme0" 00:25:13.645 }, 00:25:13.645 "method": "bdev_nvme_attach_controller" 00:25:13.645 }, 00:25:13.645 { 00:25:13.645 "method": "bdev_wait_for_examine" 00:25:13.645 } 00:25:13.645 ] 00:25:13.645 } 00:25:13.645 ] 00:25:13.645 } 00:25:13.903 [2024-06-07 21:21:36.368332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.903 [2024-06-07 21:21:36.496764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.728  Copying: 60/60 [kB] (average 58 MBps) 00:25:14.728 00:25:14.728 21:21:37 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:14.728 21:21:37 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:25:14.728 21:21:37 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:14.728 21:21:37 -- dd/common.sh@11 -- # local nvme_ref= 00:25:14.728 21:21:37 -- dd/common.sh@12 -- # local size=61440 00:25:14.728 21:21:37 -- dd/common.sh@14 -- # local bs=1048576 00:25:14.728 21:21:37 -- dd/common.sh@15 -- # local count=1 00:25:14.728 21:21:37 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:14.728 21:21:37 -- dd/common.sh@18 -- # gen_conf 00:25:14.728 21:21:37 -- dd/common.sh@31 -- # xtrace_disable 00:25:14.728 21:21:37 -- common/autotest_common.sh@10 -- # set +x 00:25:14.728 [2024-06-07 21:21:37.192855] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:14.728 [2024-06-07 21:21:37.193198] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147587 ] 00:25:14.728 { 00:25:14.728 "subsystems": [ 00:25:14.728 { 00:25:14.728 "subsystem": "bdev", 00:25:14.728 "config": [ 00:25:14.728 { 00:25:14.728 "params": { 00:25:14.728 "trtype": "pcie", 00:25:14.728 "traddr": "0000:00:06.0", 00:25:14.728 "name": "Nvme0" 00:25:14.728 }, 00:25:14.728 "method": "bdev_nvme_attach_controller" 00:25:14.728 }, 00:25:14.728 { 00:25:14.728 "method": "bdev_wait_for_examine" 00:25:14.728 } 00:25:14.728 ] 00:25:14.728 } 00:25:14.728 ] 00:25:14.728 } 00:25:14.728 [2024-06-07 21:21:37.359998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.986 [2024-06-07 21:21:37.487115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.503  Copying: 1024/1024 [kB] (average 500 MBps) 00:25:15.503 00:25:15.503 21:21:38 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:25:15.503 21:21:38 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:15.503 21:21:38 -- dd/basic_rw.sh@23 -- # count=7 00:25:15.503 21:21:38 -- dd/basic_rw.sh@24 -- # count=7 00:25:15.503 21:21:38 -- dd/basic_rw.sh@25 -- # size=57344 00:25:15.503 21:21:38 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:25:15.503 21:21:38 -- dd/common.sh@98 -- # xtrace_disable 00:25:15.503 21:21:38 -- common/autotest_common.sh@10 -- # set +x 00:25:16.097 21:21:38 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:25:16.097 21:21:38 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:16.097 21:21:38 -- dd/common.sh@31 -- # xtrace_disable 00:25:16.097 21:21:38 -- common/autotest_common.sh@10 -- # set +x 00:25:16.097 [2024-06-07 21:21:38.689668] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
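The block-size list for dd_rw is built by left-shifting the probed 4096-byte native size, and each size is crossed with queue depths 1 and 64; the transfer count shrinks as the block grows, which is exactly the count=/size= progression in this log (15x4096=61440, 7x8192=57344, 3x16384=49152). A sketch of that loop structure (run_pass is a hypothetical wrapper around the write/verify/clear cycle sketched earlier):

    native_bs=4096
    qds=(1 64)
    bss=()
    for s in {0..2}; do
        bss+=($((native_bs << s)))         # 4096 8192 16384
    done
    declare -A counts=([4096]=15 [8192]=7 [16384]=3)
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            count=${counts[$bs]}
            size=$((count * bs))           # 61440, 57344, 49152
            run_pass "$bs" "$qd" "$count" "$size"
        done
    done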
00:25:16.097 [2024-06-07 21:21:38.690597] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147614 ] 00:25:16.097 { 00:25:16.097 "subsystems": [ 00:25:16.097 { 00:25:16.097 "subsystem": "bdev", 00:25:16.097 "config": [ 00:25:16.097 { 00:25:16.097 "params": { 00:25:16.097 "trtype": "pcie", 00:25:16.097 "traddr": "0000:00:06.0", 00:25:16.097 "name": "Nvme0" 00:25:16.097 }, 00:25:16.097 "method": "bdev_nvme_attach_controller" 00:25:16.097 }, 00:25:16.097 { 00:25:16.097 "method": "bdev_wait_for_examine" 00:25:16.097 } 00:25:16.097 ] 00:25:16.097 } 00:25:16.097 ] 00:25:16.097 } 00:25:16.355 [2024-06-07 21:21:38.864675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.355 [2024-06-07 21:21:38.977228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.180  Copying: 56/56 [kB] (average 54 MBps) 00:25:17.180 00:25:17.180 21:21:39 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:25:17.180 21:21:39 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:17.180 21:21:39 -- dd/common.sh@31 -- # xtrace_disable 00:25:17.180 21:21:39 -- common/autotest_common.sh@10 -- # set +x 00:25:17.180 [2024-06-07 21:21:39.615470] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:17.180 [2024-06-07 21:21:39.616393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147634 ] 00:25:17.180 { 00:25:17.180 "subsystems": [ 00:25:17.180 { 00:25:17.180 "subsystem": "bdev", 00:25:17.180 "config": [ 00:25:17.180 { 00:25:17.180 "params": { 00:25:17.180 "trtype": "pcie", 00:25:17.180 "traddr": "0000:00:06.0", 00:25:17.180 "name": "Nvme0" 00:25:17.180 }, 00:25:17.180 "method": "bdev_nvme_attach_controller" 00:25:17.180 }, 00:25:17.180 { 00:25:17.180 "method": "bdev_wait_for_examine" 00:25:17.180 } 00:25:17.180 ] 00:25:17.180 } 00:25:17.180 ] 00:25:17.180 } 00:25:17.180 [2024-06-07 21:21:39.788341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.437 [2024-06-07 21:21:39.879612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.002  Copying: 56/56 [kB] (average 27 MBps) 00:25:18.002 00:25:18.002 21:21:40 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:18.002 21:21:40 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:25:18.002 21:21:40 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:18.002 21:21:40 -- dd/common.sh@11 -- # local nvme_ref= 00:25:18.002 21:21:40 -- dd/common.sh@12 -- # local size=57344 00:25:18.002 21:21:40 -- dd/common.sh@14 -- # local bs=1048576 00:25:18.002 21:21:40 -- dd/common.sh@15 -- # local count=1 00:25:18.002 21:21:40 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:18.002 21:21:40 -- dd/common.sh@18 -- # gen_conf 00:25:18.002 21:21:40 -- dd/common.sh@31 -- # xtrace_disable 00:25:18.002 21:21:40 -- common/autotest_common.sh@10 -- # set +x 00:25:18.002 [2024-06-07 21:21:40.572513] Starting SPDK v24.01.1-pre git sha1 
130b9406a / DPDK 23.11.0 initialization... 00:25:18.002 [2024-06-07 21:21:40.572832] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147654 ] 00:25:18.002 { 00:25:18.002 "subsystems": [ 00:25:18.002 { 00:25:18.002 "subsystem": "bdev", 00:25:18.002 "config": [ 00:25:18.002 { 00:25:18.002 "params": { 00:25:18.002 "trtype": "pcie", 00:25:18.002 "traddr": "0000:00:06.0", 00:25:18.002 "name": "Nvme0" 00:25:18.002 }, 00:25:18.002 "method": "bdev_nvme_attach_controller" 00:25:18.002 }, 00:25:18.002 { 00:25:18.002 "method": "bdev_wait_for_examine" 00:25:18.002 } 00:25:18.002 ] 00:25:18.002 } 00:25:18.002 ] 00:25:18.002 } 00:25:18.259 [2024-06-07 21:21:40.744008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.259 [2024-06-07 21:21:40.837956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.776  Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:18.776 00:25:19.035 21:21:41 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:19.035 21:21:41 -- dd/basic_rw.sh@23 -- # count=7 00:25:19.035 21:21:41 -- dd/basic_rw.sh@24 -- # count=7 00:25:19.035 21:21:41 -- dd/basic_rw.sh@25 -- # size=57344 00:25:19.035 21:21:41 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:25:19.035 21:21:41 -- dd/common.sh@98 -- # xtrace_disable 00:25:19.035 21:21:41 -- common/autotest_common.sh@10 -- # set +x 00:25:19.603 21:21:42 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:25:19.603 21:21:42 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:19.603 21:21:42 -- dd/common.sh@31 -- # xtrace_disable 00:25:19.603 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:25:19.603 [2024-06-07 21:21:42.135152] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:19.603 [2024-06-07 21:21:42.136180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147676 ] 00:25:19.603 { 00:25:19.603 "subsystems": [ 00:25:19.603 { 00:25:19.603 "subsystem": "bdev", 00:25:19.603 "config": [ 00:25:19.603 { 00:25:19.603 "params": { 00:25:19.603 "trtype": "pcie", 00:25:19.603 "traddr": "0000:00:06.0", 00:25:19.603 "name": "Nvme0" 00:25:19.603 }, 00:25:19.603 "method": "bdev_nvme_attach_controller" 00:25:19.603 }, 00:25:19.603 { 00:25:19.603 "method": "bdev_wait_for_examine" 00:25:19.603 } 00:25:19.603 ] 00:25:19.603 } 00:25:19.603 ] 00:25:19.603 } 00:25:19.862 [2024-06-07 21:21:42.307492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.862 [2024-06-07 21:21:42.400657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.379  Copying: 56/56 [kB] (average 54 MBps) 00:25:20.379 00:25:20.379 21:21:42 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:25:20.379 21:21:42 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:20.379 21:21:42 -- dd/common.sh@31 -- # xtrace_disable 00:25:20.379 21:21:42 -- common/autotest_common.sh@10 -- # set +x 00:25:20.379 { 00:25:20.379 "subsystems": [ 00:25:20.379 { 00:25:20.379 "subsystem": "bdev", 00:25:20.379 "config": [ 00:25:20.379 { 00:25:20.379 "params": { 00:25:20.379 "trtype": "pcie", 00:25:20.379 "traddr": "0000:00:06.0", 00:25:20.379 "name": "Nvme0" 00:25:20.379 }, 00:25:20.379 "method": "bdev_nvme_attach_controller" 00:25:20.379 }, 00:25:20.379 { 00:25:20.379 "method": "bdev_wait_for_examine" 00:25:20.379 } 00:25:20.379 ] 00:25:20.379 } 00:25:20.379 ] 00:25:20.379 } 00:25:20.379 [2024-06-07 21:21:43.052412] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:20.379 [2024-06-07 21:21:43.052706] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147695 ] 00:25:20.636 [2024-06-07 21:21:43.222165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.894 [2024-06-07 21:21:43.325366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.459  Copying: 56/56 [kB] (average 54 MBps) 00:25:21.459 00:25:21.459 21:21:43 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:21.459 21:21:43 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:25:21.459 21:21:43 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:21.459 21:21:43 -- dd/common.sh@11 -- # local nvme_ref= 00:25:21.459 21:21:43 -- dd/common.sh@12 -- # local size=57344 00:25:21.459 21:21:43 -- dd/common.sh@14 -- # local bs=1048576 00:25:21.459 21:21:43 -- dd/common.sh@15 -- # local count=1 00:25:21.459 21:21:43 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:21.459 21:21:43 -- dd/common.sh@18 -- # gen_conf 00:25:21.459 21:21:43 -- dd/common.sh@31 -- # xtrace_disable 00:25:21.459 21:21:43 -- common/autotest_common.sh@10 -- # set +x 00:25:21.459 [2024-06-07 21:21:44.001925] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:21.459 [2024-06-07 21:21:44.002165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147712 ] 00:25:21.459 { 00:25:21.459 "subsystems": [ 00:25:21.459 { 00:25:21.459 "subsystem": "bdev", 00:25:21.459 "config": [ 00:25:21.459 { 00:25:21.459 "params": { 00:25:21.459 "trtype": "pcie", 00:25:21.459 "traddr": "0000:00:06.0", 00:25:21.459 "name": "Nvme0" 00:25:21.459 }, 00:25:21.459 "method": "bdev_nvme_attach_controller" 00:25:21.459 }, 00:25:21.459 { 00:25:21.459 "method": "bdev_wait_for_examine" 00:25:21.459 } 00:25:21.459 ] 00:25:21.459 } 00:25:21.459 ] 00:25:21.459 } 00:25:21.717 [2024-06-07 21:21:44.172521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.717 [2024-06-07 21:21:44.289494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.233  Copying: 1024/1024 [kB] (average 500 MBps) 00:25:22.233 00:25:22.233 21:21:44 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:25:22.233 21:21:44 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:22.233 21:21:44 -- dd/basic_rw.sh@23 -- # count=3 00:25:22.233 21:21:44 -- dd/basic_rw.sh@24 -- # count=3 00:25:22.233 21:21:44 -- dd/basic_rw.sh@25 -- # size=49152 00:25:22.233 21:21:44 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:25:22.233 21:21:44 -- dd/common.sh@98 -- # xtrace_disable 00:25:22.233 21:21:44 -- common/autotest_common.sh@10 -- # set +x 00:25:22.803 21:21:45 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:25:22.803 21:21:45 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:22.804 21:21:45 -- dd/common.sh@31 -- # xtrace_disable 00:25:22.804 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:25:22.804 [2024-06-07 21:21:45.399857] Starting SPDK 
v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:22.804 [2024-06-07 21:21:45.400133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147757 ] 00:25:22.804 { 00:25:22.804 "subsystems": [ 00:25:22.804 { 00:25:22.804 "subsystem": "bdev", 00:25:22.804 "config": [ 00:25:22.804 { 00:25:22.804 "params": { 00:25:22.804 "trtype": "pcie", 00:25:22.804 "traddr": "0000:00:06.0", 00:25:22.804 "name": "Nvme0" 00:25:22.804 }, 00:25:22.804 "method": "bdev_nvme_attach_controller" 00:25:22.804 }, 00:25:22.804 { 00:25:22.804 "method": "bdev_wait_for_examine" 00:25:22.804 } 00:25:22.804 ] 00:25:22.804 } 00:25:22.804 ] 00:25:22.804 } 00:25:23.063 [2024-06-07 21:21:45.567747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.063 [2024-06-07 21:21:45.698435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.889  Copying: 48/48 [kB] (average 46 MBps) 00:25:23.889 00:25:23.889 21:21:46 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:25:23.889 21:21:46 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:23.889 21:21:46 -- dd/common.sh@31 -- # xtrace_disable 00:25:23.889 21:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:23.889 [2024-06-07 21:21:46.408448] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:23.889 [2024-06-07 21:21:46.409397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147773 ] 00:25:23.889 { 00:25:23.889 "subsystems": [ 00:25:23.889 { 00:25:23.889 "subsystem": "bdev", 00:25:23.889 "config": [ 00:25:23.889 { 00:25:23.889 "params": { 00:25:23.889 "trtype": "pcie", 00:25:23.889 "traddr": "0000:00:06.0", 00:25:23.889 "name": "Nvme0" 00:25:23.889 }, 00:25:23.889 "method": "bdev_nvme_attach_controller" 00:25:23.889 }, 00:25:23.889 { 00:25:23.889 "method": "bdev_wait_for_examine" 00:25:23.889 } 00:25:23.889 ] 00:25:23.889 } 00:25:23.889 ] 00:25:23.889 } 00:25:24.148 [2024-06-07 21:21:46.581356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.148 [2024-06-07 21:21:46.719145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.974  Copying: 48/48 [kB] (average 46 MBps) 00:25:24.974 00:25:24.974 21:21:47 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:24.974 21:21:47 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:25:24.974 21:21:47 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:24.974 21:21:47 -- dd/common.sh@11 -- # local nvme_ref= 00:25:24.974 21:21:47 -- dd/common.sh@12 -- # local size=49152 00:25:24.974 21:21:47 -- dd/common.sh@14 -- # local bs=1048576 00:25:24.974 21:21:47 -- dd/common.sh@15 -- # local count=1 00:25:24.974 21:21:47 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:24.974 21:21:47 -- dd/common.sh@18 -- # gen_conf 00:25:24.974 21:21:47 -- dd/common.sh@31 -- # xtrace_disable 00:25:24.974 21:21:47 -- common/autotest_common.sh@10 -- # set +x 00:25:24.974 
[2024-06-07 21:21:47.434600] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:24.974 [2024-06-07 21:21:47.434840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147794 ] 00:25:24.974 { 00:25:24.974 "subsystems": [ 00:25:24.974 { 00:25:24.974 "subsystem": "bdev", 00:25:24.974 "config": [ 00:25:24.974 { 00:25:24.974 "params": { 00:25:24.974 "trtype": "pcie", 00:25:24.974 "traddr": "0000:00:06.0", 00:25:24.974 "name": "Nvme0" 00:25:24.974 }, 00:25:24.974 "method": "bdev_nvme_attach_controller" 00:25:24.974 }, 00:25:24.974 { 00:25:24.974 "method": "bdev_wait_for_examine" 00:25:24.974 } 00:25:24.974 ] 00:25:24.975 } 00:25:24.975 ] 00:25:24.975 } 00:25:24.975 [2024-06-07 21:21:47.603221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.234 [2024-06-07 21:21:47.738198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.753  Copying: 1024/1024 [kB] (average 500 MBps) 00:25:25.753 00:25:25.753 21:21:48 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:25.753 21:21:48 -- dd/basic_rw.sh@23 -- # count=3 00:25:25.753 21:21:48 -- dd/basic_rw.sh@24 -- # count=3 00:25:25.753 21:21:48 -- dd/basic_rw.sh@25 -- # size=49152 00:25:25.753 21:21:48 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:25:25.753 21:21:48 -- dd/common.sh@98 -- # xtrace_disable 00:25:25.753 21:21:48 -- common/autotest_common.sh@10 -- # set +x 00:25:26.321 21:21:48 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:25:26.321 21:21:48 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:26.321 21:21:48 -- dd/common.sh@31 -- # xtrace_disable 00:25:26.321 21:21:48 -- common/autotest_common.sh@10 -- # set +x 00:25:26.321 [2024-06-07 21:21:48.958431] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:26.321 [2024-06-07 21:21:48.958697] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147821 ] 00:25:26.321 { 00:25:26.321 "subsystems": [ 00:25:26.321 { 00:25:26.321 "subsystem": "bdev", 00:25:26.321 "config": [ 00:25:26.321 { 00:25:26.321 "params": { 00:25:26.321 "trtype": "pcie", 00:25:26.321 "traddr": "0000:00:06.0", 00:25:26.321 "name": "Nvme0" 00:25:26.321 }, 00:25:26.321 "method": "bdev_nvme_attach_controller" 00:25:26.321 }, 00:25:26.321 { 00:25:26.321 "method": "bdev_wait_for_examine" 00:25:26.321 } 00:25:26.321 ] 00:25:26.321 } 00:25:26.321 ] 00:25:26.321 } 00:25:26.581 [2024-06-07 21:21:49.132504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.581 [2024-06-07 21:21:49.242986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.408  Copying: 48/48 [kB] (average 46 MBps) 00:25:27.408 00:25:27.408 21:21:49 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:25:27.408 21:21:49 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:27.408 21:21:49 -- dd/common.sh@31 -- # xtrace_disable 00:25:27.408 21:21:49 -- common/autotest_common.sh@10 -- # set +x 00:25:27.408 { 00:25:27.408 "subsystems": [ 00:25:27.408 { 00:25:27.408 "subsystem": "bdev", 00:25:27.408 "config": [ 00:25:27.408 { 00:25:27.408 "params": { 00:25:27.408 "trtype": "pcie", 00:25:27.408 "traddr": "0000:00:06.0", 00:25:27.408 "name": "Nvme0" 00:25:27.408 }, 00:25:27.408 "method": "bdev_nvme_attach_controller" 00:25:27.408 }, 00:25:27.408 { 00:25:27.408 "method": "bdev_wait_for_examine" 00:25:27.408 } 00:25:27.408 ] 00:25:27.408 } 00:25:27.408 ] 00:25:27.408 } 00:25:27.408 [2024-06-07 21:21:49.939615] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:27.408 [2024-06-07 21:21:49.939879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147841 ] 00:25:27.667 [2024-06-07 21:21:50.114762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.667 [2024-06-07 21:21:50.221010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.185  Copying: 48/48 [kB] (average 46 MBps) 00:25:28.185 00:25:28.185 21:21:50 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:28.185 21:21:50 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:25:28.185 21:21:50 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:28.185 21:21:50 -- dd/common.sh@11 -- # local nvme_ref= 00:25:28.185 21:21:50 -- dd/common.sh@12 -- # local size=49152 00:25:28.185 21:21:50 -- dd/common.sh@14 -- # local bs=1048576 00:25:28.185 21:21:50 -- dd/common.sh@15 -- # local count=1 00:25:28.185 21:21:50 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:28.185 21:21:50 -- dd/common.sh@18 -- # gen_conf 00:25:28.185 21:21:50 -- dd/common.sh@31 -- # xtrace_disable 00:25:28.185 21:21:50 -- common/autotest_common.sh@10 -- # set +x 00:25:28.444 [2024-06-07 21:21:50.913570] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:28.444 [2024-06-07 21:21:50.913834] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147862 ] 00:25:28.444 { 00:25:28.444 "subsystems": [ 00:25:28.444 { 00:25:28.444 "subsystem": "bdev", 00:25:28.444 "config": [ 00:25:28.444 { 00:25:28.444 "params": { 00:25:28.444 "trtype": "pcie", 00:25:28.444 "traddr": "0000:00:06.0", 00:25:28.444 "name": "Nvme0" 00:25:28.444 }, 00:25:28.444 "method": "bdev_nvme_attach_controller" 00:25:28.444 }, 00:25:28.444 { 00:25:28.444 "method": "bdev_wait_for_examine" 00:25:28.444 } 00:25:28.444 ] 00:25:28.444 } 00:25:28.444 ] 00:25:28.444 } 00:25:28.444 [2024-06-07 21:21:51.084649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.702 [2024-06-07 21:21:51.198901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.221  Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:29.221 00:25:29.221 ************************************ 00:25:29.221 END TEST dd_rw 00:25:29.221 ************************************ 00:25:29.221 00:25:29.221 real 0m20.792s 00:25:29.221 user 0m14.447s 00:25:29.221 sys 0m5.124s 00:25:29.221 21:21:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:29.221 21:21:51 -- common/autotest_common.sh@10 -- # set +x 00:25:29.221 21:21:51 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:25:29.221 21:21:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:29.221 21:21:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:29.221 21:21:51 -- common/autotest_common.sh@10 -- # set +x 00:25:29.480 ************************************ 00:25:29.480 START TEST dd_rw_offset 00:25:29.480 ************************************ 00:25:29.480 21:21:51 -- common/autotest_common.sh@1104 -- # basic_offset 00:25:29.480 21:21:51 -- dd/basic_rw.sh@52 -- # local count 
seek skip data data_check 00:25:29.480 21:21:51 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:25:29.480 21:21:51 -- dd/common.sh@98 -- # xtrace_disable 00:25:29.480 21:21:51 -- common/autotest_common.sh@10 -- # set +x 00:25:29.480 21:21:51 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:25:29.481 21:21:51 -- dd/basic_rw.sh@56 -- # data=xhjqtpritawb6xt0w4mbkr8xa6g6vekkh2ujxzln1rldh2ilxxee9wdxpwtlpxdo0y4kiahrg9vytehsfabjlo9f63ydnki6yvmmwqrofrmyfw9eqjnoovnv4o808qtkflryqhqr1vek3lz5nrq1fxzb5u9clf8jcu6n00xd68925fteug32uhkb3djwy5gfrbpkf8sx20l200jb6fmfiy1n0a7r74l1ghzwq9n44md2kof8zsgbk59f6pw1obeur3prrjch6wlze80db96t5gki78ou9tahcxj0oay92y76osoln12nie85czqg5mnjs36t1xo4xtudujs8kpg4drm5huaqokxq2s45x19c9trulxzo6wzopf4im5u5w0llejzbxv4mfqrfy6q60hcyzrwwerjryxz6agprp3oanw8che5s8yefze4o5hnb4ix63vp6wf1u9vmopgcrlj1e5fpnkhvdfywgyui9ehte09eh828cy5j1hht9g6vm87xn62610sgt2op7zntmhl1c5my4gnc5d9gygqppjsh7v3sx68lqqgqrwjorronynypdu6ravqmghonlub4cnks81vpcu8kpw1m437y4pic9px8t3f0k480swdukv8i5qy8q1eyei7c0d8mon6rtq0olj8sbnj5q5rgqc02ryq9q2i13db631trifydjry21q00mbw9r5z2q6tn2sv780fbovxvasbyb2m4br2s182ojs3aild6vxpx188zuupe2bjx9z2nzvren9i4d81eazifhefaztbs387x8smh2lwqb95kulf6owv03bbyfbjfsihkzg4s6iplrlheu0ofretzvvjj6i2r1caw0p5ypazi1oli1h0zviqipy5f22v6ktimre92hy5euc5kzx4kfdz96qg5yn0ztdjwt0v0wam1ub2azqmzsq0nk98zz87y0byfu7sjmfhe5cifr82h008apdd2p4y0nu4zu45k2x3hseqdibxaf5xh1oajc41w8dmsspy0aulkmgvy92ear1tj1ou04bu6rwdrwe0ppmbr0u93yqe9s8s62jfe3w024615zggo7qbvyrnw47tjo8dll5yd52x1iw36cjok9e8ds411y7cwrakxvqrdsu8cjxr5obf0ww6sbdvgx216l22r8adlyu0mwjm1ocwccva504phxwx8nxax3uk6fg7amfidofpi8qmaht5maizswsa723wa1utiaiqbkss8901cu2wu1ypak4zi3i18694f2eiiev7mg0lrp39ktjgdp4f9uzuweuebdih1no24hsdw7lskphnl9q8q2wgmslb9bwfr8z6hxrycm7h64fe6kwb11g4ezvwz54q8h163fqvnpfcj3mu2c35kytqazcpvdfyn4vg5e63qus5em25uaz46kvk82937gfxbi5ia16fxw1opdxlkty310vydca441v5h9p06zmqto1o60pr8ihc33v8crzl5d6ot1d5honupzfm7hmvtp9fhliomyi4hp52squ2rgs2zykl8830gn4ylp873cr0qj697sxlhvt1q209i7o3vu0y98pf25ufjqardlx41eldfaa6cg3wys54n9v58bvqdwhpbahr1y8ob47qyuyri5cjygxlskzuq2b5qclsy67bkrt2bkxawsoyt1zoai9zp90jp7imwej9fxh1wuz10wug33vqafaqt4mhwhohxoasrrtdowts37fakeu5o8ho7b6jpdsk3kfrx5rhuy95bfem5cme26yqz4rb6cmusiw0x5933qifvo1tc3paf62ad9k52lkv3s3ykgn90fxpue763m8etztu56kpkut2t0t0wajgcoa9cgutf6wyrgzn85vzrjqcl2hp7zi004ybnhczk1islnd0u7tp6433b3zv7ucpbmq5f7u6tnnj7xwbcp4on6zx139tde9kmy6hj10uxl59adhexm2u20ib5n0xtzied95rak61vt90zq741024bf6gb3rcqid1pzme8o5scbg7lqulkwr9p8tg91xwo857ol2tohrvri4ot43z5ma303lbitfdd7ehrotve4ideplep1zriw9pm3oru8msa1zzkrhxfd7dkqz0yrdgdcom3vlfj1uqno640cwzejcijzl2xo940qaudz73oz4l4oruwa97l8d9ca7pm2sx5dvn75geecm8yxofvcctw5utznq44jcvmoil7baqm5pv8nlm4fi75p7pkniybz9yeqx17q35ywsoqawakb7t852l4cjau1d5yvvv9kc1bzfs9bji2z2erp4n8fjfvf2dzzymu7gu5xm3t7svwwew78w1c38ky5a37npbq8p5j27q4cuapbsc9r7r6uiwdsoli9xh00p3fs14h9oa4g1kpig1upjq897o42a7i8smts8qwq0l617yax469pc90kz985qu2w3vor85xm0yxk6u7gm76ysbl19diw0t23g3gmj8v58iaixhny55zeiufzb2i9bowfwm2gacalp7prxhq4mccdz2u0lcixhhly44meyq0v1qkb97e2f8xnupy1w1trvkrj6ostvsqkazvkcw2yhox89xc7bhucrwy6cqansu1hgqi3x2hh6c57bj66h345hlr36eigxa0px1iqy7jcdzgwkofmrey0wg9i2iuklmcxe3n1ev4ykqikdmi8hgjsqv7w57gowhhndf7kki42g85eswxek533d2i0edrqa6zmustzadt3p6a936aoj2c9245j99a1pagha4682f8y02ufg0j6i9nxjhwugg45axnx10yfl3383f8100tqiil3cmfpzayb9f85xi8815pvmtfbpm9kqeza2ajalanwnrrefmr7a1v2szhc23w44c8cxezn3lw3gupwmsnlwosop00idyjuwffvxv11vqj6qctdi9ffzaucipum56bi0xnddg087a2cax84e5i8mfwz7p8rgb10qyu7hc9smfhh7a5u3i3es4nryymd0ivmorr6si11su5bi1bni0v9whxsqy8q7vczvnj56ermkw8497r8sfp06mtie3ihsv9i56lhj2ihj628l52wr9h8h8szpeqt4zi6x9x2mhkqcwwtr6vchoqfzcv26whobmq5q63qc9bx09oey77lcmnn7sfrahtb7lpalq26wfw1lej3ignd
igd9ucf4abxxy8om055gvkps5ytswk70bn77upuf07rx5myce9itmrloczhfmkg1d5x5a8ngbgmibzfesixvjpvvhjfuk7o5zyfy5lik6091eb10hly6enrpg0w2xn4rweooej7r8khaie73ex1qydgpa4mdbag2yjxdde1852o4gcju6pzwyuz3mm9h9bhcknzp7yukz93cvl048omdqvly0s8v599psls2wc7lkrchdqgz4p1bi5u939js70k2jpar6tvl97x4nvjtg8zdahg47qs32nwk711rld4idh04hg4yiyuhdwysgjsnnuv4oyelxooebp5u8qbc4xk8wc5wszpwb24h9uu7rqzthtujvi2zgkoq427iz8fsbk10k7v0nibe452y5dtonmrfhj9sphmqsy32ec8vtbfs0y2b611n4bxt5ovgnawzxer6j575gdlz4ygk70zfkszquro9ukql8esm2vgq6prb04vyuexgi6qjxj3vf95yl2v4i7r0f2a50a9wfsu4r6vok3zeee6beb32eqqd0igs4en12vxk2y29429ri967hlkj7xfmm4w3rxh4bil61u5end5wlkcznf8tyoav64xh393ab6s9ryizxhklj2gjzl62mwlfrsxj8l8chiulkbukihgw2db4589084biwvwjhvrv8qiiyyzx8518ucqbvnf63werx8bkump89bao7r0a4pxodclj1ttuqb8n2zzsb8bgl5uuuucgdjcmjzz1th2550aljcn6vdqf7y6rj7k9760oxbix2fun28n61tqayicjwtda571punbgfph54sjqbpixbmel80q905p9lyf24my15tak4jmamfbhv6wddco0f014yttqi936 00:25:29.481 21:21:51 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:25:29.481 21:21:51 -- dd/basic_rw.sh@59 -- # gen_conf 00:25:29.481 21:21:51 -- dd/common.sh@31 -- # xtrace_disable 00:25:29.481 21:21:51 -- common/autotest_common.sh@10 -- # set +x 00:25:29.481 [2024-06-07 21:21:52.010437] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:29.481 [2024-06-07 21:21:52.010662] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147902 ] 00:25:29.481 { 00:25:29.481 "subsystems": [ 00:25:29.481 { 00:25:29.481 "subsystem": "bdev", 00:25:29.481 "config": [ 00:25:29.481 { 00:25:29.481 "params": { 00:25:29.481 "trtype": "pcie", 00:25:29.481 "traddr": "0000:00:06.0", 00:25:29.481 "name": "Nvme0" 00:25:29.481 }, 00:25:29.481 "method": "bdev_nvme_attach_controller" 00:25:29.481 }, 00:25:29.481 { 00:25:29.481 "method": "bdev_wait_for_examine" 00:25:29.481 } 00:25:29.481 ] 00:25:29.481 } 00:25:29.481 ] 00:25:29.481 } 00:25:29.740 [2024-06-07 21:21:52.176642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.740 [2024-06-07 21:21:52.263660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.258  Copying: 4096/4096 [B] (average 4000 kBps) 00:25:30.258 00:25:30.258 21:21:52 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:25:30.258 21:21:52 -- dd/basic_rw.sh@65 -- # gen_conf 00:25:30.258 21:21:52 -- dd/common.sh@31 -- # xtrace_disable 00:25:30.258 21:21:52 -- common/autotest_common.sh@10 -- # set +x 00:25:30.516 [2024-06-07 21:21:52.962008] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
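For anyone reading along: every spdk_dd invocation in this trace receives its bdev configuration as JSON on an anonymous descriptor (--json /dev/fd/62), emitted by the gen_conf helper; the configuration itself is the small JSON document echoed below. A minimal standalone equivalent, writing the config to a plain file instead — the ./build/bin/spdk_dd path and the dd.dump0 file name are assumptions for illustration, while the PCI address 0000:00:06.0 and the JSON body are taken verbatim from this log:

# Sketch: the same bdev config, written to a file instead of /dev/fd/62.
cat > conf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

SPDK_DD=./build/bin/spdk_dd    # path assumed; matches this log's repo layout
"$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json conf.json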
00:25:30.516 [2024-06-07 21:21:52.962305] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147925 ] 00:25:30.516 { 00:25:30.516 "subsystems": [ 00:25:30.516 { 00:25:30.516 "subsystem": "bdev", 00:25:30.516 "config": [ 00:25:30.516 { 00:25:30.516 "params": { 00:25:30.516 "trtype": "pcie", 00:25:30.516 "traddr": "0000:00:06.0", 00:25:30.516 "name": "Nvme0" 00:25:30.516 }, 00:25:30.516 "method": "bdev_nvme_attach_controller" 00:25:30.516 }, 00:25:30.516 { 00:25:30.516 "method": "bdev_wait_for_examine" 00:25:30.516 } 00:25:30.516 ] 00:25:30.516 } 00:25:30.516 ] 00:25:30.516 } 00:25:30.517 [2024-06-07 21:21:53.131530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.775 [2024-06-07 21:21:53.236755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.343  Copying: 4096/4096 [B] (average 4000 kBps) 00:25:31.343 00:25:31.343 21:21:53 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:25:31.343 ************************************ 00:25:31.343 END TEST dd_rw_offset 00:25:31.343 ************************************ 00:25:31.344 21:21:53 -- dd/basic_rw.sh@72 -- # [[ xhjqtpritawb6xt0w4mbkr8xa6g6vekkh2ujxzln1rldh2ilxxee9wdxpwtlpxdo0y4kiahrg9vytehsfabjlo9f63ydnki6yvmmwqrofrmyfw9eqjnoovnv4o808qtkflryqhqr1vek3lz5nrq1fxzb5u9clf8jcu6n00xd68925fteug32uhkb3djwy5gfrbpkf8sx20l200jb6fmfiy1n0a7r74l1ghzwq9n44md2kof8zsgbk59f6pw1obeur3prrjch6wlze80db96t5gki78ou9tahcxj0oay92y76osoln12nie85czqg5mnjs36t1xo4xtudujs8kpg4drm5huaqokxq2s45x19c9trulxzo6wzopf4im5u5w0llejzbxv4mfqrfy6q60hcyzrwwerjryxz6agprp3oanw8che5s8yefze4o5hnb4ix63vp6wf1u9vmopgcrlj1e5fpnkhvdfywgyui9ehte09eh828cy5j1hht9g6vm87xn62610sgt2op7zntmhl1c5my4gnc5d9gygqppjsh7v3sx68lqqgqrwjorronynypdu6ravqmghonlub4cnks81vpcu8kpw1m437y4pic9px8t3f0k480swdukv8i5qy8q1eyei7c0d8mon6rtq0olj8sbnj5q5rgqc02ryq9q2i13db631trifydjry21q00mbw9r5z2q6tn2sv780fbovxvasbyb2m4br2s182ojs3aild6vxpx188zuupe2bjx9z2nzvren9i4d81eazifhefaztbs387x8smh2lwqb95kulf6owv03bbyfbjfsihkzg4s6iplrlheu0ofretzvvjj6i2r1caw0p5ypazi1oli1h0zviqipy5f22v6ktimre92hy5euc5kzx4kfdz96qg5yn0ztdjwt0v0wam1ub2azqmzsq0nk98zz87y0byfu7sjmfhe5cifr82h008apdd2p4y0nu4zu45k2x3hseqdibxaf5xh1oajc41w8dmsspy0aulkmgvy92ear1tj1ou04bu6rwdrwe0ppmbr0u93yqe9s8s62jfe3w024615zggo7qbvyrnw47tjo8dll5yd52x1iw36cjok9e8ds411y7cwrakxvqrdsu8cjxr5obf0ww6sbdvgx216l22r8adlyu0mwjm1ocwccva504phxwx8nxax3uk6fg7amfidofpi8qmaht5maizswsa723wa1utiaiqbkss8901cu2wu1ypak4zi3i18694f2eiiev7mg0lrp39ktjgdp4f9uzuweuebdih1no24hsdw7lskphnl9q8q2wgmslb9bwfr8z6hxrycm7h64fe6kwb11g4ezvwz54q8h163fqvnpfcj3mu2c35kytqazcpvdfyn4vg5e63qus5em25uaz46kvk82937gfxbi5ia16fxw1opdxlkty310vydca441v5h9p06zmqto1o60pr8ihc33v8crzl5d6ot1d5honupzfm7hmvtp9fhliomyi4hp52squ2rgs2zykl8830gn4ylp873cr0qj697sxlhvt1q209i7o3vu0y98pf25ufjqardlx41eldfaa6cg3wys54n9v58bvqdwhpbahr1y8ob47qyuyri5cjygxlskzuq2b5qclsy67bkrt2bkxawsoyt1zoai9zp90jp7imwej9fxh1wuz10wug33vqafaqt4mhwhohxoasrrtdowts37fakeu5o8ho7b6jpdsk3kfrx5rhuy95bfem5cme26yqz4rb6cmusiw0x5933qifvo1tc3paf62ad9k52lkv3s3ykgn90fxpue763m8etztu56kpkut2t0t0wajgcoa9cgutf6wyrgzn85vzrjqcl2hp7zi004ybnhczk1islnd0u7tp6433b3zv7ucpbmq5f7u6tnnj7xwbcp4on6zx139tde9kmy6hj10uxl59adhexm2u20ib5n0xtzied95rak61vt90zq741024bf6gb3rcqid1pzme8o5scbg7lqulkwr9p8tg91xwo857ol2tohrvri4ot43z5ma303lbitfdd7ehrotve4ideplep1zriw9pm3oru8msa1zzkrhxfd7dkqz0yrdgdcom3vlfj1uqno640cwzejcijzl2xo940qaudz73oz4l4oruwa97l8d9ca7pm2sx5dvn75geecm8yxofvcctw5utznq44jcvmoil7baqm5pv8nlm
4fi75p7pkniybz9yeqx17q35ywsoqawakb7t852l4cjau1d5yvvv9kc1bzfs9bji2z2erp4n8fjfvf2dzzymu7gu5xm3t7svwwew78w1c38ky5a37npbq8p5j27q4cuapbsc9r7r6uiwdsoli9xh00p3fs14h9oa4g1kpig1upjq897o42a7i8smts8qwq0l617yax469pc90kz985qu2w3vor85xm0yxk6u7gm76ysbl19diw0t23g3gmj8v58iaixhny55zeiufzb2i9bowfwm2gacalp7prxhq4mccdz2u0lcixhhly44meyq0v1qkb97e2f8xnupy1w1trvkrj6ostvsqkazvkcw2yhox89xc7bhucrwy6cqansu1hgqi3x2hh6c57bj66h345hlr36eigxa0px1iqy7jcdzgwkofmrey0wg9i2iuklmcxe3n1ev4ykqikdmi8hgjsqv7w57gowhhndf7kki42g85eswxek533d2i0edrqa6zmustzadt3p6a936aoj2c9245j99a1pagha4682f8y02ufg0j6i9nxjhwugg45axnx10yfl3383f8100tqiil3cmfpzayb9f85xi8815pvmtfbpm9kqeza2ajalanwnrrefmr7a1v2szhc23w44c8cxezn3lw3gupwmsnlwosop00idyjuwffvxv11vqj6qctdi9ffzaucipum56bi0xnddg087a2cax84e5i8mfwz7p8rgb10qyu7hc9smfhh7a5u3i3es4nryymd0ivmorr6si11su5bi1bni0v9whxsqy8q7vczvnj56ermkw8497r8sfp06mtie3ihsv9i56lhj2ihj628l52wr9h8h8szpeqt4zi6x9x2mhkqcwwtr6vchoqfzcv26whobmq5q63qc9bx09oey77lcmnn7sfrahtb7lpalq26wfw1lej3igndigd9ucf4abxxy8om055gvkps5ytswk70bn77upuf07rx5myce9itmrloczhfmkg1d5x5a8ngbgmibzfesixvjpvvhjfuk7o5zyfy5lik6091eb10hly6enrpg0w2xn4rweooej7r8khaie73ex1qydgpa4mdbag2yjxdde1852o4gcju6pzwyuz3mm9h9bhcknzp7yukz93cvl048omdqvly0s8v599psls2wc7lkrchdqgz4p1bi5u939js70k2jpar6tvl97x4nvjtg8zdahg47qs32nwk711rld4idh04hg4yiyuhdwysgjsnnuv4oyelxooebp5u8qbc4xk8wc5wszpwb24h9uu7rqzthtujvi2zgkoq427iz8fsbk10k7v0nibe452y5dtonmrfhj9sphmqsy32ec8vtbfs0y2b611n4bxt5ovgnawzxer6j575gdlz4ygk70zfkszquro9ukql8esm2vgq6prb04vyuexgi6qjxj3vf95yl2v4i7r0f2a50a9wfsu4r6vok3zeee6beb32eqqd0igs4en12vxk2y29429ri967hlkj7xfmm4w3rxh4bil61u5end5wlkcznf8tyoav64xh393ab6s9ryizxhklj2gjzl62mwlfrsxj8l8chiulkbukihgw2db4589084biwvwjhvrv8qiiyyzx8518ucqbvnf63werx8bkump89bao7r0a4pxodclj1ttuqb8n2zzsb8bgl5uuuucgdjcmjzz1th2550aljcn6vdqf7y6rj7k9760oxbix2fun28n61tqayicjwtda571punbgfph54sjqbpixbmel80q905p9lyf24my15tak4jmamfbhv6wddco0f014yttqi936 == 
\x\h\j\q\t\p\r\i\t\a\w\b\6\x\t\0\w\4\m\b\k\r\8\x\a\6\g\6\v\e\k\k\h\2\u\j\x\z\l\n\1\r\l\d\h\2\i\l\x\x\e\e\9\w\d\x\p\w\t\l\p\x\d\o\0\y\4\k\i\a\h\r\g\9\v\y\t\e\h\s\f\a\b\j\l\o\9\f\6\3\y\d\n\k\i\6\y\v\m\m\w\q\r\o\f\r\m\y\f\w\9\e\q\j\n\o\o\v\n\v\4\o\8\0\8\q\t\k\f\l\r\y\q\h\q\r\1\v\e\k\3\l\z\5\n\r\q\1\f\x\z\b\5\u\9\c\l\f\8\j\c\u\6\n\0\0\x\d\6\8\9\2\5\f\t\e\u\g\3\2\u\h\k\b\3\d\j\w\y\5\g\f\r\b\p\k\f\8\s\x\2\0\l\2\0\0\j\b\6\f\m\f\i\y\1\n\0\a\7\r\7\4\l\1\g\h\z\w\q\9\n\4\4\m\d\2\k\o\f\8\z\s\g\b\k\5\9\f\6\p\w\1\o\b\e\u\r\3\p\r\r\j\c\h\6\w\l\z\e\8\0\d\b\9\6\t\5\g\k\i\7\8\o\u\9\t\a\h\c\x\j\0\o\a\y\9\2\y\7\6\o\s\o\l\n\1\2\n\i\e\8\5\c\z\q\g\5\m\n\j\s\3\6\t\1\x\o\4\x\t\u\d\u\j\s\8\k\p\g\4\d\r\m\5\h\u\a\q\o\k\x\q\2\s\4\5\x\1\9\c\9\t\r\u\l\x\z\o\6\w\z\o\p\f\4\i\m\5\u\5\w\0\l\l\e\j\z\b\x\v\4\m\f\q\r\f\y\6\q\6\0\h\c\y\z\r\w\w\e\r\j\r\y\x\z\6\a\g\p\r\p\3\o\a\n\w\8\c\h\e\5\s\8\y\e\f\z\e\4\o\5\h\n\b\4\i\x\6\3\v\p\6\w\f\1\u\9\v\m\o\p\g\c\r\l\j\1\e\5\f\p\n\k\h\v\d\f\y\w\g\y\u\i\9\e\h\t\e\0\9\e\h\8\2\8\c\y\5\j\1\h\h\t\9\g\6\v\m\8\7\x\n\6\2\6\1\0\s\g\t\2\o\p\7\z\n\t\m\h\l\1\c\5\m\y\4\g\n\c\5\d\9\g\y\g\q\p\p\j\s\h\7\v\3\s\x\6\8\l\q\q\g\q\r\w\j\o\r\r\o\n\y\n\y\p\d\u\6\r\a\v\q\m\g\h\o\n\l\u\b\4\c\n\k\s\8\1\v\p\c\u\8\k\p\w\1\m\4\3\7\y\4\p\i\c\9\p\x\8\t\3\f\0\k\4\8\0\s\w\d\u\k\v\8\i\5\q\y\8\q\1\e\y\e\i\7\c\0\d\8\m\o\n\6\r\t\q\0\o\l\j\8\s\b\n\j\5\q\5\r\g\q\c\0\2\r\y\q\9\q\2\i\1\3\d\b\6\3\1\t\r\i\f\y\d\j\r\y\2\1\q\0\0\m\b\w\9\r\5\z\2\q\6\t\n\2\s\v\7\8\0\f\b\o\v\x\v\a\s\b\y\b\2\m\4\b\r\2\s\1\8\2\o\j\s\3\a\i\l\d\6\v\x\p\x\1\8\8\z\u\u\p\e\2\b\j\x\9\z\2\n\z\v\r\e\n\9\i\4\d\8\1\e\a\z\i\f\h\e\f\a\z\t\b\s\3\8\7\x\8\s\m\h\2\l\w\q\b\9\5\k\u\l\f\6\o\w\v\0\3\b\b\y\f\b\j\f\s\i\h\k\z\g\4\s\6\i\p\l\r\l\h\e\u\0\o\f\r\e\t\z\v\v\j\j\6\i\2\r\1\c\a\w\0\p\5\y\p\a\z\i\1\o\l\i\1\h\0\z\v\i\q\i\p\y\5\f\2\2\v\6\k\t\i\m\r\e\9\2\h\y\5\e\u\c\5\k\z\x\4\k\f\d\z\9\6\q\g\5\y\n\0\z\t\d\j\w\t\0\v\0\w\a\m\1\u\b\2\a\z\q\m\z\s\q\0\n\k\9\8\z\z\8\7\y\0\b\y\f\u\7\s\j\m\f\h\e\5\c\i\f\r\8\2\h\0\0\8\a\p\d\d\2\p\4\y\0\n\u\4\z\u\4\5\k\2\x\3\h\s\e\q\d\i\b\x\a\f\5\x\h\1\o\a\j\c\4\1\w\8\d\m\s\s\p\y\0\a\u\l\k\m\g\v\y\9\2\e\a\r\1\t\j\1\o\u\0\4\b\u\6\r\w\d\r\w\e\0\p\p\m\b\r\0\u\9\3\y\q\e\9\s\8\s\6\2\j\f\e\3\w\0\2\4\6\1\5\z\g\g\o\7\q\b\v\y\r\n\w\4\7\t\j\o\8\d\l\l\5\y\d\5\2\x\1\i\w\3\6\c\j\o\k\9\e\8\d\s\4\1\1\y\7\c\w\r\a\k\x\v\q\r\d\s\u\8\c\j\x\r\5\o\b\f\0\w\w\6\s\b\d\v\g\x\2\1\6\l\2\2\r\8\a\d\l\y\u\0\m\w\j\m\1\o\c\w\c\c\v\a\5\0\4\p\h\x\w\x\8\n\x\a\x\3\u\k\6\f\g\7\a\m\f\i\d\o\f\p\i\8\q\m\a\h\t\5\m\a\i\z\s\w\s\a\7\2\3\w\a\1\u\t\i\a\i\q\b\k\s\s\8\9\0\1\c\u\2\w\u\1\y\p\a\k\4\z\i\3\i\1\8\6\9\4\f\2\e\i\i\e\v\7\m\g\0\l\r\p\3\9\k\t\j\g\d\p\4\f\9\u\z\u\w\e\u\e\b\d\i\h\1\n\o\2\4\h\s\d\w\7\l\s\k\p\h\n\l\9\q\8\q\2\w\g\m\s\l\b\9\b\w\f\r\8\z\6\h\x\r\y\c\m\7\h\6\4\f\e\6\k\w\b\1\1\g\4\e\z\v\w\z\5\4\q\8\h\1\6\3\f\q\v\n\p\f\c\j\3\m\u\2\c\3\5\k\y\t\q\a\z\c\p\v\d\f\y\n\4\v\g\5\e\6\3\q\u\s\5\e\m\2\5\u\a\z\4\6\k\v\k\8\2\9\3\7\g\f\x\b\i\5\i\a\1\6\f\x\w\1\o\p\d\x\l\k\t\y\3\1\0\v\y\d\c\a\4\4\1\v\5\h\9\p\0\6\z\m\q\t\o\1\o\6\0\p\r\8\i\h\c\3\3\v\8\c\r\z\l\5\d\6\o\t\1\d\5\h\o\n\u\p\z\f\m\7\h\m\v\t\p\9\f\h\l\i\o\m\y\i\4\h\p\5\2\s\q\u\2\r\g\s\2\z\y\k\l\8\8\3\0\g\n\4\y\l\p\8\7\3\c\r\0\q\j\6\9\7\s\x\l\h\v\t\1\q\2\0\9\i\7\o\3\v\u\0\y\9\8\p\f\2\5\u\f\j\q\a\r\d\l\x\4\1\e\l\d\f\a\a\6\c\g\3\w\y\s\5\4\n\9\v\5\8\b\v\q\d\w\h\p\b\a\h\r\1\y\8\o\b\4\7\q\y\u\y\r\i\5\c\j\y\g\x\l\s\k\z\u\q\2\b\5\q\c\l\s\y\6\7\b\k\r\t\2\b\k\x\a\w\s\o\y\t\1\z\o\a\i\9\z\p\9\0\j\p\7\i\m\w\e\j\9\f\x\h\1\w\u\z\1\0\w\u\g\3\3\v\q\a\f\a\q\t\4\m\h\w\h\o\h\x\o\a\s\r\r\t\d\o\w\t\s\3\7\f\a\k\e\u\5\o\8\h\o\7\b\6\j\p\d\s\k\3\k\f\r\x\5\r\h\u\y\9\5\b\f\e\
m\5\c\m\e\2\6\y\q\z\4\r\b\6\c\m\u\s\i\w\0\x\5\9\3\3\q\i\f\v\o\1\t\c\3\p\a\f\6\2\a\d\9\k\5\2\l\k\v\3\s\3\y\k\g\n\9\0\f\x\p\u\e\7\6\3\m\8\e\t\z\t\u\5\6\k\p\k\u\t\2\t\0\t\0\w\a\j\g\c\o\a\9\c\g\u\t\f\6\w\y\r\g\z\n\8\5\v\z\r\j\q\c\l\2\h\p\7\z\i\0\0\4\y\b\n\h\c\z\k\1\i\s\l\n\d\0\u\7\t\p\6\4\3\3\b\3\z\v\7\u\c\p\b\m\q\5\f\7\u\6\t\n\n\j\7\x\w\b\c\p\4\o\n\6\z\x\1\3\9\t\d\e\9\k\m\y\6\h\j\1\0\u\x\l\5\9\a\d\h\e\x\m\2\u\2\0\i\b\5\n\0\x\t\z\i\e\d\9\5\r\a\k\6\1\v\t\9\0\z\q\7\4\1\0\2\4\b\f\6\g\b\3\r\c\q\i\d\1\p\z\m\e\8\o\5\s\c\b\g\7\l\q\u\l\k\w\r\9\p\8\t\g\9\1\x\w\o\8\5\7\o\l\2\t\o\h\r\v\r\i\4\o\t\4\3\z\5\m\a\3\0\3\l\b\i\t\f\d\d\7\e\h\r\o\t\v\e\4\i\d\e\p\l\e\p\1\z\r\i\w\9\p\m\3\o\r\u\8\m\s\a\1\z\z\k\r\h\x\f\d\7\d\k\q\z\0\y\r\d\g\d\c\o\m\3\v\l\f\j\1\u\q\n\o\6\4\0\c\w\z\e\j\c\i\j\z\l\2\x\o\9\4\0\q\a\u\d\z\7\3\o\z\4\l\4\o\r\u\w\a\9\7\l\8\d\9\c\a\7\p\m\2\s\x\5\d\v\n\7\5\g\e\e\c\m\8\y\x\o\f\v\c\c\t\w\5\u\t\z\n\q\4\4\j\c\v\m\o\i\l\7\b\a\q\m\5\p\v\8\n\l\m\4\f\i\7\5\p\7\p\k\n\i\y\b\z\9\y\e\q\x\1\7\q\3\5\y\w\s\o\q\a\w\a\k\b\7\t\8\5\2\l\4\c\j\a\u\1\d\5\y\v\v\v\9\k\c\1\b\z\f\s\9\b\j\i\2\z\2\e\r\p\4\n\8\f\j\f\v\f\2\d\z\z\y\m\u\7\g\u\5\x\m\3\t\7\s\v\w\w\e\w\7\8\w\1\c\3\8\k\y\5\a\3\7\n\p\b\q\8\p\5\j\2\7\q\4\c\u\a\p\b\s\c\9\r\7\r\6\u\i\w\d\s\o\l\i\9\x\h\0\0\p\3\f\s\1\4\h\9\o\a\4\g\1\k\p\i\g\1\u\p\j\q\8\9\7\o\4\2\a\7\i\8\s\m\t\s\8\q\w\q\0\l\6\1\7\y\a\x\4\6\9\p\c\9\0\k\z\9\8\5\q\u\2\w\3\v\o\r\8\5\x\m\0\y\x\k\6\u\7\g\m\7\6\y\s\b\l\1\9\d\i\w\0\t\2\3\g\3\g\m\j\8\v\5\8\i\a\i\x\h\n\y\5\5\z\e\i\u\f\z\b\2\i\9\b\o\w\f\w\m\2\g\a\c\a\l\p\7\p\r\x\h\q\4\m\c\c\d\z\2\u\0\l\c\i\x\h\h\l\y\4\4\m\e\y\q\0\v\1\q\k\b\9\7\e\2\f\8\x\n\u\p\y\1\w\1\t\r\v\k\r\j\6\o\s\t\v\s\q\k\a\z\v\k\c\w\2\y\h\o\x\8\9\x\c\7\b\h\u\c\r\w\y\6\c\q\a\n\s\u\1\h\g\q\i\3\x\2\h\h\6\c\5\7\b\j\6\6\h\3\4\5\h\l\r\3\6\e\i\g\x\a\0\p\x\1\i\q\y\7\j\c\d\z\g\w\k\o\f\m\r\e\y\0\w\g\9\i\2\i\u\k\l\m\c\x\e\3\n\1\e\v\4\y\k\q\i\k\d\m\i\8\h\g\j\s\q\v\7\w\5\7\g\o\w\h\h\n\d\f\7\k\k\i\4\2\g\8\5\e\s\w\x\e\k\5\3\3\d\2\i\0\e\d\r\q\a\6\z\m\u\s\t\z\a\d\t\3\p\6\a\9\3\6\a\o\j\2\c\9\2\4\5\j\9\9\a\1\p\a\g\h\a\4\6\8\2\f\8\y\0\2\u\f\g\0\j\6\i\9\n\x\j\h\w\u\g\g\4\5\a\x\n\x\1\0\y\f\l\3\3\8\3\f\8\1\0\0\t\q\i\i\l\3\c\m\f\p\z\a\y\b\9\f\8\5\x\i\8\8\1\5\p\v\m\t\f\b\p\m\9\k\q\e\z\a\2\a\j\a\l\a\n\w\n\r\r\e\f\m\r\7\a\1\v\2\s\z\h\c\2\3\w\4\4\c\8\c\x\e\z\n\3\l\w\3\g\u\p\w\m\s\n\l\w\o\s\o\p\0\0\i\d\y\j\u\w\f\f\v\x\v\1\1\v\q\j\6\q\c\t\d\i\9\f\f\z\a\u\c\i\p\u\m\5\6\b\i\0\x\n\d\d\g\0\8\7\a\2\c\a\x\8\4\e\5\i\8\m\f\w\z\7\p\8\r\g\b\1\0\q\y\u\7\h\c\9\s\m\f\h\h\7\a\5\u\3\i\3\e\s\4\n\r\y\y\m\d\0\i\v\m\o\r\r\6\s\i\1\1\s\u\5\b\i\1\b\n\i\0\v\9\w\h\x\s\q\y\8\q\7\v\c\z\v\n\j\5\6\e\r\m\k\w\8\4\9\7\r\8\s\f\p\0\6\m\t\i\e\3\i\h\s\v\9\i\5\6\l\h\j\2\i\h\j\6\2\8\l\5\2\w\r\9\h\8\h\8\s\z\p\e\q\t\4\z\i\6\x\9\x\2\m\h\k\q\c\w\w\t\r\6\v\c\h\o\q\f\z\c\v\2\6\w\h\o\b\m\q\5\q\6\3\q\c\9\b\x\0\9\o\e\y\7\7\l\c\m\n\n\7\s\f\r\a\h\t\b\7\l\p\a\l\q\2\6\w\f\w\1\l\e\j\3\i\g\n\d\i\g\d\9\u\c\f\4\a\b\x\x\y\8\o\m\0\5\5\g\v\k\p\s\5\y\t\s\w\k\7\0\b\n\7\7\u\p\u\f\0\7\r\x\5\m\y\c\e\9\i\t\m\r\l\o\c\z\h\f\m\k\g\1\d\5\x\5\a\8\n\g\b\g\m\i\b\z\f\e\s\i\x\v\j\p\v\v\h\j\f\u\k\7\o\5\z\y\f\y\5\l\i\k\6\0\9\1\e\b\1\0\h\l\y\6\e\n\r\p\g\0\w\2\x\n\4\r\w\e\o\o\e\j\7\r\8\k\h\a\i\e\7\3\e\x\1\q\y\d\g\p\a\4\m\d\b\a\g\2\y\j\x\d\d\e\1\8\5\2\o\4\g\c\j\u\6\p\z\w\y\u\z\3\m\m\9\h\9\b\h\c\k\n\z\p\7\y\u\k\z\9\3\c\v\l\0\4\8\o\m\d\q\v\l\y\0\s\8\v\5\9\9\p\s\l\s\2\w\c\7\l\k\r\c\h\d\q\g\z\4\p\1\b\i\5\u\9\3\9\j\s\7\0\k\2\j\p\a\r\6\t\v\l\9\7\x\4\n\v\j\t\g\8\z\d\a\h\g\4\7\q\s\3\2\n\w\k\7\1\1\r\l\d\4\i\d\h\0\4\h\g\4\y\i\y\u\h\d\w\y\s\g\j\s\n\n\u\v\4\o\y\e\l\x\o\o\e\b\p\5\u\8\q\b\c\4\x\k\8\w\c\5\w\s\z\p
\w\b\2\4\h\9\u\u\7\r\q\z\t\h\t\u\j\v\i\2\z\g\k\o\q\4\2\7\i\z\8\f\s\b\k\1\0\k\7\v\0\n\i\b\e\4\5\2\y\5\d\t\o\n\m\r\f\h\j\9\s\p\h\m\q\s\y\3\2\e\c\8\v\t\b\f\s\0\y\2\b\6\1\1\n\4\b\x\t\5\o\v\g\n\a\w\z\x\e\r\6\j\5\7\5\g\d\l\z\4\y\g\k\7\0\z\f\k\s\z\q\u\r\o\9\u\k\q\l\8\e\s\m\2\v\g\q\6\p\r\b\0\4\v\y\u\e\x\g\i\6\q\j\x\j\3\v\f\9\5\y\l\2\v\4\i\7\r\0\f\2\a\5\0\a\9\w\f\s\u\4\r\6\v\o\k\3\z\e\e\e\6\b\e\b\3\2\e\q\q\d\0\i\g\s\4\e\n\1\2\v\x\k\2\y\2\9\4\2\9\r\i\9\6\7\h\l\k\j\7\x\f\m\m\4\w\3\r\x\h\4\b\i\l\6\1\u\5\e\n\d\5\w\l\k\c\z\n\f\8\t\y\o\a\v\6\4\x\h\3\9\3\a\b\6\s\9\r\y\i\z\x\h\k\l\j\2\g\j\z\l\6\2\m\w\l\f\r\s\x\j\8\l\8\c\h\i\u\l\k\b\u\k\i\h\g\w\2\d\b\4\5\8\9\0\8\4\b\i\w\v\w\j\h\v\r\v\8\q\i\i\y\y\z\x\8\5\1\8\u\c\q\b\v\n\f\6\3\w\e\r\x\8\b\k\u\m\p\8\9\b\a\o\7\r\0\a\4\p\x\o\d\c\l\j\1\t\t\u\q\b\8\n\2\z\z\s\b\8\b\g\l\5\u\u\u\u\c\g\d\j\c\m\j\z\z\1\t\h\2\5\5\0\a\l\j\c\n\6\v\d\q\f\7\y\6\r\j\7\k\9\7\6\0\o\x\b\i\x\2\f\u\n\2\8\n\6\1\t\q\a\y\i\c\j\w\t\d\a\5\7\1\p\u\n\b\g\f\p\h\5\4\s\j\q\b\p\i\x\b\m\e\l\8\0\q\9\0\5\p\9\l\y\f\2\4\m\y\1\5\t\a\k\4\j\m\a\m\f\b\h\v\6\w\d\d\c\o\0\f\0\1\4\y\t\t\q\i\9\3\6 ]] 00:25:31.344 00:25:31.344 real 0m1.974s 00:25:31.344 user 0m1.249s 00:25:31.344 sys 0m0.603s 00:25:31.344 21:21:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:31.344 21:21:53 -- common/autotest_common.sh@10 -- # set +x 00:25:31.344 21:21:53 -- dd/basic_rw.sh@1 -- # cleanup 00:25:31.344 21:21:53 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:25:31.344 21:21:53 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:31.344 21:21:53 -- dd/common.sh@11 -- # local nvme_ref= 00:25:31.344 21:21:53 -- dd/common.sh@12 -- # local size=0xffff 00:25:31.344 21:21:53 -- dd/common.sh@14 -- # local bs=1048576 00:25:31.344 21:21:53 -- dd/common.sh@15 -- # local count=1 00:25:31.344 21:21:53 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:31.344 21:21:53 -- dd/common.sh@18 -- # gen_conf 00:25:31.344 21:21:53 -- dd/common.sh@31 -- # xtrace_disable 00:25:31.344 21:21:53 -- common/autotest_common.sh@10 -- # set +x 00:25:31.344 [2024-06-07 21:21:53.974412] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
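The dd_rw_offset test that just finished is a three-step round trip: generate a random 4096-byte buffer, write it to the bdev at block offset 1 (--seek=1), read the same block back (--skip=1 --count=1), and require byte equality; the cleanup now starting zeroes the first MiB from /dev/zero. A condensed sketch reusing conf.json from the previous note — the dump file names are placeholders, gen_bytes is approximated with /dev/urandom, and the redirect feeding read is an assumption the log does not show:

# Sketch: write one block at an offset, read it back, compare, clean up.
data=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 4096)   # stand-in for gen_bytes 4096
printf %s "$data" > dd.dump0

"$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json conf.json             # write block 1
"$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json conf.json   # read block 1

read -rn4096 data_check < dd.dump1                      # input source assumed
[[ $data == "$data_check" ]] && echo "offset round trip OK"

"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json conf.json  # clear_nvme-style cleanup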
00:25:31.344 [2024-06-07 21:21:53.974665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147956 ] 00:25:31.344 { 00:25:31.344 "subsystems": [ 00:25:31.344 { 00:25:31.344 "subsystem": "bdev", 00:25:31.344 "config": [ 00:25:31.344 { 00:25:31.344 "params": { 00:25:31.344 "trtype": "pcie", 00:25:31.344 "traddr": "0000:00:06.0", 00:25:31.344 "name": "Nvme0" 00:25:31.344 }, 00:25:31.344 "method": "bdev_nvme_attach_controller" 00:25:31.344 }, 00:25:31.344 { 00:25:31.344 "method": "bdev_wait_for_examine" 00:25:31.344 } 00:25:31.344 ] 00:25:31.344 } 00:25:31.344 ] 00:25:31.344 } 00:25:31.604 [2024-06-07 21:21:54.138223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.604 [2024-06-07 21:21:54.242009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.429  Copying: 1024/1024 [kB] (average 500 MBps) 00:25:32.429 00:25:32.429 21:21:54 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:32.429 00:25:32.429 real 0m25.151s 00:25:32.429 user 0m17.106s 00:25:32.429 sys 0m6.504s 00:25:32.429 21:21:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:32.429 ************************************ 00:25:32.429 END TEST spdk_dd_basic_rw 00:25:32.429 ************************************ 00:25:32.429 21:21:54 -- common/autotest_common.sh@10 -- # set +x 00:25:32.429 21:21:54 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:25:32.429 21:21:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:32.429 21:21:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:32.429 21:21:54 -- common/autotest_common.sh@10 -- # set +x 00:25:32.429 ************************************ 00:25:32.429 START TEST spdk_dd_posix 00:25:32.429 ************************************ 00:25:32.429 21:21:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:25:32.429 * Looking for test storage... 
00:25:32.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:32.429 21:21:55 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:32.429 21:21:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.429 21:21:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.429 21:21:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.429 21:21:55 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:32.429 21:21:55 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:32.429 21:21:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:32.429 21:21:55 -- paths/export.sh@5 -- # export PATH 00:25:32.429 21:21:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:32.429 21:21:55 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:25:32.429 21:21:55 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:25:32.429 21:21:55 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:25:32.429 21:21:55 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:25:32.429 21:21:55 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:32.429 21:21:55 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:32.429 21:21:55 -- dd/posix.sh@130 -- # tests 00:25:32.429 21:21:55 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:25:32.429 * First test run, using AIO 00:25:32.429 21:21:55 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:25:32.429 21:21:55 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:32.429 21:21:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:32.429 21:21:55 -- common/autotest_common.sh@10 -- # set +x 00:25:32.429 ************************************ 00:25:32.429 START TEST dd_flag_append 00:25:32.429 ************************************ 00:25:32.429 21:21:55 -- common/autotest_common.sh@1104 -- # append 00:25:32.429 21:21:55 -- dd/posix.sh@16 -- # local dump0 00:25:32.429 21:21:55 -- dd/posix.sh@17 -- # local dump1 00:25:32.429 21:21:55 -- dd/posix.sh@19 -- # gen_bytes 32 00:25:32.429 21:21:55 -- dd/common.sh@98 -- # xtrace_disable 00:25:32.429 21:21:55 -- common/autotest_common.sh@10 -- # set +x 00:25:32.429 21:21:55 -- dd/posix.sh@19 -- # dump0=08y8thxmta2dak8bensums50hgminxsd 00:25:32.429 21:21:55 -- dd/posix.sh@20 -- # gen_bytes 32 00:25:32.429 21:21:55 -- dd/common.sh@98 -- # xtrace_disable 00:25:32.429 21:21:55 -- common/autotest_common.sh@10 -- # set +x 00:25:32.429 21:21:55 -- dd/posix.sh@20 -- # dump1=pa8keq75bcv7e6uvlxd51eyfvtgmyh73 00:25:32.429 21:21:55 -- dd/posix.sh@22 -- # printf %s 08y8thxmta2dak8bensums50hgminxsd 00:25:32.429 21:21:55 -- dd/posix.sh@23 -- # printf %s pa8keq75bcv7e6uvlxd51eyfvtgmyh73 00:25:32.429 21:21:55 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:25:32.688 [2024-06-07 21:21:55.129464] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:32.688 [2024-06-07 21:21:55.130600] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148025 ] 00:25:32.688 [2024-06-07 21:21:55.309296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.946 [2024-06-07 21:21:55.420989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.514  Copying: 32/32 [B] (average 31 kBps) 00:25:33.514 00:25:33.514 21:21:55 -- dd/posix.sh@27 -- # [[ pa8keq75bcv7e6uvlxd51eyfvtgmyh7308y8thxmta2dak8bensums50hgminxsd == \p\a\8\k\e\q\7\5\b\c\v\7\e\6\u\v\l\x\d\5\1\e\y\f\v\t\g\m\y\h\7\3\0\8\y\8\t\h\x\m\t\a\2\d\a\k\8\b\e\n\s\u\m\s\5\0\h\g\m\i\n\x\s\d ]] 00:25:33.514 ************************************ 00:25:33.514 END TEST dd_flag_append 00:25:33.514 00:25:33.514 real 0m0.932s 00:25:33.514 user 0m0.510s 00:25:33.514 sys 0m0.280s 00:25:33.514 21:21:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:33.514 21:21:55 -- common/autotest_common.sh@10 -- # set +x 00:25:33.514 ************************************ 00:25:33.514 21:21:56 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:25:33.514 21:21:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:33.514 21:21:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:33.514 21:21:56 -- common/autotest_common.sh@10 -- # set +x 00:25:33.514 ************************************ 00:25:33.514 START TEST dd_flag_directory 00:25:33.514 ************************************ 00:25:33.514 21:21:56 -- common/autotest_common.sh@1104 -- # directory 00:25:33.514 21:21:56 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:33.514 21:21:56 -- common/autotest_common.sh@640 -- # local es=0 
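Looking back at the dd_flag_append run that finished just above: two independent 32-byte strings are generated, dump0 goes into the first file and dump1 into the second, the first file is then copied onto the second with --oflag=append, and the result must be dump1 immediately followed by dump0. The same check in miniature, with throwaway file names assumed:

# Sketch: O_APPEND via spdk_dd, verified by string concatenation.
dump0=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
dump1=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
printf %s "$dump0" > file0
printf %s "$dump1" > file1

"$SPDK_DD" --if=file0 --of=file1 --oflag=append

[[ $(< file1) == "${dump1}${dump0}" ]] && echo "append flag OK"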
00:25:33.514 21:21:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:33.514 21:21:56 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:33.514 21:21:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:33.514 21:21:56 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:33.514 21:21:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:33.514 21:21:56 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:33.514 21:21:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:33.514 21:21:56 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:33.514 21:21:56 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:33.514 21:21:56 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:33.514 [2024-06-07 21:21:56.102516] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:33.514 [2024-06-07 21:21:56.103332] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148082 ] 00:25:33.772 [2024-06-07 21:21:56.272569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.772 [2024-06-07 21:21:56.376163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.031 [2024-06-07 21:21:56.501723] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:34.031 [2024-06-07 21:21:56.501860] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:34.031 [2024-06-07 21:21:56.501892] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:34.031 [2024-06-07 21:21:56.697295] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:34.290 21:21:56 -- common/autotest_common.sh@643 -- # es=236 00:25:34.290 21:21:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:34.290 21:21:56 -- common/autotest_common.sh@652 -- # es=108 00:25:34.290 21:21:56 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:34.290 21:21:56 -- common/autotest_common.sh@660 -- # es=1 00:25:34.290 21:21:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:34.290 21:21:56 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:34.290 21:21:56 -- common/autotest_common.sh@640 -- # local es=0 00:25:34.290 21:21:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:34.290 21:21:56 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:34.290 21:21:56 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:34.290 21:21:56 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:34.290 21:21:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:34.290 21:21:56 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:34.290 21:21:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:34.290 21:21:56 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:34.290 21:21:56 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:34.290 21:21:56 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:34.290 [2024-06-07 21:21:56.953193] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:34.290 [2024-06-07 21:21:56.953468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148103 ] 00:25:34.549 [2024-06-07 21:21:57.124115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.808 [2024-06-07 21:21:57.236074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.808 [2024-06-07 21:21:57.366745] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:34.808 [2024-06-07 21:21:57.366886] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:34.808 [2024-06-07 21:21:57.366918] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:35.066 [2024-06-07 21:21:57.566281] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:35.326 21:21:57 -- common/autotest_common.sh@643 -- # es=236 00:25:35.326 21:21:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:35.326 21:21:57 -- common/autotest_common.sh@652 -- # es=108 00:25:35.326 21:21:57 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:35.326 21:21:57 -- common/autotest_common.sh@660 -- # es=1 00:25:35.326 21:21:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:35.326 00:25:35.326 real 0m1.724s 00:25:35.326 user 0m0.993s 00:25:35.326 sys 0m0.530s 00:25:35.326 21:21:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:35.326 21:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:35.326 ************************************ 00:25:35.326 END TEST dd_flag_directory 00:25:35.326 ************************************ 00:25:35.326 21:21:57 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:25:35.326 21:21:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:35.326 21:21:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:35.326 21:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:35.326 ************************************ 00:25:35.326 START TEST dd_flag_nofollow 00:25:35.326 ************************************ 00:25:35.326 21:21:57 -- common/autotest_common.sh@1104 -- # nofollow 00:25:35.326 21:21:57 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:35.326 21:21:57 -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:35.326 21:21:57 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:35.326 21:21:57 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:35.326 21:21:57 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:35.326 21:21:57 -- common/autotest_common.sh@640 -- # local es=0 00:25:35.326 21:21:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:35.326 21:21:57 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:35.326 21:21:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:35.326 21:21:57 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:35.326 21:21:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:35.326 21:21:57 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:35.326 21:21:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:35.326 21:21:57 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:35.326 21:21:57 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:35.326 21:21:57 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:35.326 [2024-06-07 21:21:57.880795] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
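The NOT machinery being stepped through here (and in dd_flag_directory above) wraps a command that is expected to fail: it captures the exit status, folds signal-style statuses above 128 back into range (the es=236 -> es=108 step visible in the trace), and returns success only for a plain nonzero exit. A simplified sketch — the real autotest_common.sh version also special-cases particular signal exit codes, which this omits:

# Sketch of the NOT() idiom: succeed only if the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$(( es & ~128 ))   # strip the signal bit, as in es=236 -> es=108
    (( es != 0 ))                           # true iff the command exited nonzero
}

# A regular file opened with --iflag=directory must fail with ENOTDIR:
NOT "$SPDK_DD" --if=dd.dump0 --iflag=directory --of=dd.dump0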
00:25:35.326 [2024-06-07 21:21:57.881050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148142 ] 00:25:35.585 [2024-06-07 21:21:58.049180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.585 [2024-06-07 21:21:58.155706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.843 [2024-06-07 21:21:58.276676] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:35.843 [2024-06-07 21:21:58.276823] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:35.843 [2024-06-07 21:21:58.276861] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:35.843 [2024-06-07 21:21:58.455473] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:36.102 21:21:58 -- common/autotest_common.sh@643 -- # es=216 00:25:36.102 21:21:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:36.102 21:21:58 -- common/autotest_common.sh@652 -- # es=88 00:25:36.102 21:21:58 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:36.102 21:21:58 -- common/autotest_common.sh@660 -- # es=1 00:25:36.102 21:21:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:36.102 21:21:58 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:36.102 21:21:58 -- common/autotest_common.sh@640 -- # local es=0 00:25:36.102 21:21:58 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:36.102 21:21:58 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:36.102 21:21:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:36.102 21:21:58 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:36.102 21:21:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:36.102 21:21:58 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:36.102 21:21:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:36.102 21:21:58 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:36.102 21:21:58 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:36.102 21:21:58 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:36.102 [2024-06-07 21:21:58.713654] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:36.102 [2024-06-07 21:21:58.714176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148148 ] 00:25:36.361 [2024-06-07 21:21:58.897319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.361 [2024-06-07 21:21:58.991792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.618 [2024-06-07 21:21:59.117788] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:36.618 [2024-06-07 21:21:59.117904] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:36.618 [2024-06-07 21:21:59.117935] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:36.876 [2024-06-07 21:21:59.315209] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:36.876 21:21:59 -- common/autotest_common.sh@643 -- # es=216 00:25:36.876 21:21:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:36.876 21:21:59 -- common/autotest_common.sh@652 -- # es=88 00:25:36.876 21:21:59 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:36.876 21:21:59 -- common/autotest_common.sh@660 -- # es=1 00:25:36.876 21:21:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:36.876 21:21:59 -- dd/posix.sh@46 -- # gen_bytes 512 00:25:36.876 21:21:59 -- dd/common.sh@98 -- # xtrace_disable 00:25:36.876 21:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:36.876 21:21:59 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:36.876 [2024-06-07 21:21:59.546335] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
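Both failing nofollow runs above use the same recipe: create a symlink with ln -fs, then ask spdk_dd to open it with --iflag=nofollow (read side) or --oflag=nofollow (write side), which must fail with ELOOP ("Too many levels of symbolic links"). The third invocation, now starting, copies through the link with no flag at all and must succeed. A sketch reusing the NOT helper from earlier, with link and file names assumed:

# Sketch: O_NOFOLLOW must refuse symlinks; a plain open follows them.
ln -fs file0 file0.link
ln -fs file1 file1.link

NOT "$SPDK_DD" --if=file0.link --iflag=nofollow --of=file1   # read side: ELOOP expected
NOT "$SPDK_DD" --if=file0 --of=file1.link --oflag=nofollow   # write side: ELOOP expected

"$SPDK_DD" --if=file0.link --of=file1                        # no flag: link is followed
cmp -s file0 file1 && echo "nofollow semantics OK"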
00:25:36.876 [2024-06-07 21:21:59.546611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148165 ] 00:25:37.133 [2024-06-07 21:21:59.714425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.395 [2024-06-07 21:21:59.811935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.963  Copying: 512/512 [B] (average 500 kBps) 00:25:37.963 00:25:37.963 21:22:00 -- dd/posix.sh@49 -- # [[ v3e7du7kcdg081y3fe9m26jimvjqszrs1kknhcn8mn3izwt3r9c5fnsiz4xj7rj0ugieoymp73qde3h5fhbtmz1d87h5xhbq40d9yrnoj4i4jxl9covq1uncqcx923stwosy0u8gze0impthd8cmztdn0v8phwr5z300fv9jr09vf00wbxykxfrxj2cp1p4jaeatwbew5bu03gdzov22p472l7mli98jc98sp6bn3lsbwezuf7dw7zxm1y5gqivfc7dy5oakq2c86otsfs49rm5pvb6q4j2ugaugudxumulhs4ujbgsuqmz1ztx6bi7r8gp7cg05qqdwyqo5ouej6vdopl37jo4i2s2cgsw64utgdfw0eplaumluz0e25h04otidbrgcsr0nd9r8tbshuhe24hqdqccziagt7q9t928zfmjknwhmu5170wuz3k6qgve2nzes7jvmsrhk2l6mtei7sxbv515poyq85br2cx5khs78jiuwtj23p5y97mg6 == \v\3\e\7\d\u\7\k\c\d\g\0\8\1\y\3\f\e\9\m\2\6\j\i\m\v\j\q\s\z\r\s\1\k\k\n\h\c\n\8\m\n\3\i\z\w\t\3\r\9\c\5\f\n\s\i\z\4\x\j\7\r\j\0\u\g\i\e\o\y\m\p\7\3\q\d\e\3\h\5\f\h\b\t\m\z\1\d\8\7\h\5\x\h\b\q\4\0\d\9\y\r\n\o\j\4\i\4\j\x\l\9\c\o\v\q\1\u\n\c\q\c\x\9\2\3\s\t\w\o\s\y\0\u\8\g\z\e\0\i\m\p\t\h\d\8\c\m\z\t\d\n\0\v\8\p\h\w\r\5\z\3\0\0\f\v\9\j\r\0\9\v\f\0\0\w\b\x\y\k\x\f\r\x\j\2\c\p\1\p\4\j\a\e\a\t\w\b\e\w\5\b\u\0\3\g\d\z\o\v\2\2\p\4\7\2\l\7\m\l\i\9\8\j\c\9\8\s\p\6\b\n\3\l\s\b\w\e\z\u\f\7\d\w\7\z\x\m\1\y\5\g\q\i\v\f\c\7\d\y\5\o\a\k\q\2\c\8\6\o\t\s\f\s\4\9\r\m\5\p\v\b\6\q\4\j\2\u\g\a\u\g\u\d\x\u\m\u\l\h\s\4\u\j\b\g\s\u\q\m\z\1\z\t\x\6\b\i\7\r\8\g\p\7\c\g\0\5\q\q\d\w\y\q\o\5\o\u\e\j\6\v\d\o\p\l\3\7\j\o\4\i\2\s\2\c\g\s\w\6\4\u\t\g\d\f\w\0\e\p\l\a\u\m\l\u\z\0\e\2\5\h\0\4\o\t\i\d\b\r\g\c\s\r\0\n\d\9\r\8\t\b\s\h\u\h\e\2\4\h\q\d\q\c\c\z\i\a\g\t\7\q\9\t\9\2\8\z\f\m\j\k\n\w\h\m\u\5\1\7\0\w\u\z\3\k\6\q\g\v\e\2\n\z\e\s\7\j\v\m\s\r\h\k\2\l\6\m\t\e\i\7\s\x\b\v\5\1\5\p\o\y\q\8\5\b\r\2\c\x\5\k\h\s\7\8\j\i\u\w\t\j\2\3\p\5\y\9\7\m\g\6 ]] 00:25:37.963 ************************************ 00:25:37.963 END TEST dd_flag_nofollow 00:25:37.963 ************************************ 00:25:37.963 00:25:37.963 real 0m2.558s 00:25:37.963 user 0m1.412s 00:25:37.963 sys 0m0.806s 00:25:37.963 21:22:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:37.963 21:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:37.963 21:22:00 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:25:37.963 21:22:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:37.963 21:22:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:37.963 21:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:37.963 ************************************ 00:25:37.963 START TEST dd_flag_noatime 00:25:37.963 ************************************ 00:25:37.963 21:22:00 -- common/autotest_common.sh@1104 -- # noatime 00:25:37.963 21:22:00 -- dd/posix.sh@53 -- # local atime_if 00:25:37.963 21:22:00 -- dd/posix.sh@54 -- # local atime_of 00:25:37.963 21:22:00 -- dd/posix.sh@58 -- # gen_bytes 512 00:25:37.963 21:22:00 -- dd/common.sh@98 -- # xtrace_disable 00:25:37.963 21:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:37.963 21:22:00 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:37.963 21:22:00 -- dd/posix.sh@60 -- # atime_if=1717795319 00:25:37.963 21:22:00 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:37.963 21:22:00 -- dd/posix.sh@61 -- # atime_of=1717795320 00:25:37.963 21:22:00 -- dd/posix.sh@66 -- # sleep 1 00:25:38.898 21:22:01 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:38.898 [2024-06-07 21:22:01.495463] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:38.898 [2024-06-07 21:22:01.495655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148217 ] 00:25:39.157 [2024-06-07 21:22:01.658724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.157 [2024-06-07 21:22:01.757765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.674  Copying: 512/512 [B] (average 500 kBps) 00:25:39.674 00:25:39.674 21:22:02 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:39.674 21:22:02 -- dd/posix.sh@69 -- # (( atime_if == 1717795319 )) 00:25:39.674 21:22:02 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:39.674 21:22:02 -- dd/posix.sh@70 -- # (( atime_of == 1717795320 )) 00:25:39.674 21:22:02 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:39.932 [2024-06-07 21:22:02.357719] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:39.933 [2024-06-07 21:22:02.357936] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148236 ] 00:25:39.933 [2024-06-07 21:22:02.517674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.933 [2024-06-07 21:22:02.602851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.784  Copying: 512/512 [B] (average 500 kBps) 00:25:40.784 00:25:40.784 21:22:03 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:40.784 21:22:03 -- dd/posix.sh@73 -- # (( atime_if < 1717795322 )) 00:25:40.784 00:25:40.784 real 0m2.742s 00:25:40.784 user 0m0.962s 00:25:40.784 sys 0m0.511s 00:25:40.784 21:22:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.784 ************************************ 00:25:40.784 END TEST dd_flag_noatime 00:25:40.784 ************************************ 00:25:40.784 21:22:03 -- common/autotest_common.sh@10 -- # set +x 00:25:40.784 21:22:03 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:25:40.784 21:22:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:40.784 21:22:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:40.784 21:22:03 -- common/autotest_common.sh@10 -- # set +x 00:25:40.784 ************************************ 00:25:40.784 START TEST dd_flags_misc 00:25:40.784 ************************************ 00:25:40.784 21:22:03 -- common/autotest_common.sh@1104 -- # io 00:25:40.784 21:22:03 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:25:40.784 21:22:03 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
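The dd_flag_noatime test that closed above samples access times with stat --printf=%X, which prints a file's atime in epoch seconds: after reading the source with --iflag=noatime, the atime must be unchanged (the (( atime_if == 1717795319 )) check), whereas a later read without the flag is expected to advance it — subject to the filesystem's mount options, since relatime can defer updates. A sketch of the same probe, file names assumed:

# Sketch: verify O_NOATIME by sampling atime around each read.
atime_before=$(stat --printf=%X file0)
sleep 1                                           # so a changed atime is observable

"$SPDK_DD" --if=file0 --iflag=noatime --of=file1
(( $(stat --printf=%X file0) == atime_before )) && echo "noatime held"

"$SPDK_DD" --if=file0 --of=file1                  # unflagged read: atime may advance
(( $(stat --printf=%X file0) >= atime_before ))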
00:25:40.784 21:22:03 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:25:40.784 21:22:03 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:25:40.784 21:22:03 -- dd/posix.sh@86 -- # gen_bytes 512 00:25:40.784 21:22:03 -- dd/common.sh@98 -- # xtrace_disable 00:25:40.784 21:22:03 -- common/autotest_common.sh@10 -- # set +x 00:25:40.784 21:22:03 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:40.784 21:22:03 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:25:40.784 [2024-06-07 21:22:03.299364] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:40.784 [2024-06-07 21:22:03.299854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148272 ] 00:25:41.042 [2024-06-07 21:22:03.480895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.042 [2024-06-07 21:22:03.571508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.608  Copying: 512/512 [B] (average 500 kBps) 00:25:41.608 00:25:41.608 21:22:04 -- dd/posix.sh@93 -- # [[ yr8fcb5jcq2yz3l6rdfb0obammnmcr99pu3ryas6wud62fbg0rshqihwpw994u0oer0j9d34b1kfihp21zgx1zff83p076kigjlc6due9w1c5xmu29krkadgyt30yxqnc57ivm6ugdm6t4ifxkturz77t5dhjr50w1adgvsq5nlm09y2i02lnmq9m2v29dy9cq0ad9wvffv8zbrj6y3xw5hc0ahqwpcbj2uw4u434fdxcaayb69sspanguszthicxck04r1ouu11b94hag4l5ptzbsmmwy15fhoitb2us3gw0w5vcqfqbx19bjgzaqqj7cv5kslr16cdlib8gcx11xgbgxc2xankywbdqdi8ulikzi0b6kba8hrzoboxyxlntn9ucbdfqkhr83dhw6smowx6vj6w45d3jyg5mryesf7xrycakqr8s6xubw0n5fychqt34e44qyz79bccvz87dvew8gowmxtqtatxtlsascnc72dtccowzx0wae8qaekt == \y\r\8\f\c\b\5\j\c\q\2\y\z\3\l\6\r\d\f\b\0\o\b\a\m\m\n\m\c\r\9\9\p\u\3\r\y\a\s\6\w\u\d\6\2\f\b\g\0\r\s\h\q\i\h\w\p\w\9\9\4\u\0\o\e\r\0\j\9\d\3\4\b\1\k\f\i\h\p\2\1\z\g\x\1\z\f\f\8\3\p\0\7\6\k\i\g\j\l\c\6\d\u\e\9\w\1\c\5\x\m\u\2\9\k\r\k\a\d\g\y\t\3\0\y\x\q\n\c\5\7\i\v\m\6\u\g\d\m\6\t\4\i\f\x\k\t\u\r\z\7\7\t\5\d\h\j\r\5\0\w\1\a\d\g\v\s\q\5\n\l\m\0\9\y\2\i\0\2\l\n\m\q\9\m\2\v\2\9\d\y\9\c\q\0\a\d\9\w\v\f\f\v\8\z\b\r\j\6\y\3\x\w\5\h\c\0\a\h\q\w\p\c\b\j\2\u\w\4\u\4\3\4\f\d\x\c\a\a\y\b\6\9\s\s\p\a\n\g\u\s\z\t\h\i\c\x\c\k\0\4\r\1\o\u\u\1\1\b\9\4\h\a\g\4\l\5\p\t\z\b\s\m\m\w\y\1\5\f\h\o\i\t\b\2\u\s\3\g\w\0\w\5\v\c\q\f\q\b\x\1\9\b\j\g\z\a\q\q\j\7\c\v\5\k\s\l\r\1\6\c\d\l\i\b\8\g\c\x\1\1\x\g\b\g\x\c\2\x\a\n\k\y\w\b\d\q\d\i\8\u\l\i\k\z\i\0\b\6\k\b\a\8\h\r\z\o\b\o\x\y\x\l\n\t\n\9\u\c\b\d\f\q\k\h\r\8\3\d\h\w\6\s\m\o\w\x\6\v\j\6\w\4\5\d\3\j\y\g\5\m\r\y\e\s\f\7\x\r\y\c\a\k\q\r\8\s\6\x\u\b\w\0\n\5\f\y\c\h\q\t\3\4\e\4\4\q\y\z\7\9\b\c\c\v\z\8\7\d\v\e\w\8\g\o\w\m\x\t\q\t\a\t\x\t\l\s\a\s\c\n\c\7\2\d\t\c\c\o\w\z\x\0\w\a\e\8\q\a\e\k\t ]] 00:25:41.608 21:22:04 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:41.608 21:22:04 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:25:41.608 [2024-06-07 21:22:04.162315] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
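dd_flags_misc, now underway, crosses the read-side open flags (direct, nonblock) with the write-side set (the same two plus sync and dsync) and round-trips a fresh 512-byte buffer for every pairing; the first pairing in the trace is direct/direct. The loop structure is equivalent to this sketch, with file names assumed:

# Sketch: exercise every iflag/oflag pairing, as the test's nested loop does.
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)   # write side adds sync/dsync

for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
        tr -dc 'a-z0-9' < /dev/urandom | head -c 512 > file0   # fresh payload each round
        "$SPDK_DD" --if=file0 --iflag="$flag_ro" --of=file1 --oflag="$flag_rw"
        [[ $(< file0) == "$(< file1)" ]] || echo "mismatch for $flag_ro/$flag_rw"
    done
done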
00:25:41.608 [2024-06-07 21:22:04.162566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148292 ] 00:25:41.866 [2024-06-07 21:22:04.320604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.866 [2024-06-07 21:22:04.425004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.381  Copying: 512/512 [B] (average 500 kBps) 00:25:42.381 00:25:42.381 21:22:04 -- dd/posix.sh@93 -- # [[ yr8fcb5jcq2yz3l6rdfb0obammnmcr99pu3ryas6wud62fbg0rshqihwpw994u0oer0j9d34b1kfihp21zgx1zff83p076kigjlc6due9w1c5xmu29krkadgyt30yxqnc57ivm6ugdm6t4ifxkturz77t5dhjr50w1adgvsq5nlm09y2i02lnmq9m2v29dy9cq0ad9wvffv8zbrj6y3xw5hc0ahqwpcbj2uw4u434fdxcaayb69sspanguszthicxck04r1ouu11b94hag4l5ptzbsmmwy15fhoitb2us3gw0w5vcqfqbx19bjgzaqqj7cv5kslr16cdlib8gcx11xgbgxc2xankywbdqdi8ulikzi0b6kba8hrzoboxyxlntn9ucbdfqkhr83dhw6smowx6vj6w45d3jyg5mryesf7xrycakqr8s6xubw0n5fychqt34e44qyz79bccvz87dvew8gowmxtqtatxtlsascnc72dtccowzx0wae8qaekt == \y\r\8\f\c\b\5\j\c\q\2\y\z\3\l\6\r\d\f\b\0\o\b\a\m\m\n\m\c\r\9\9\p\u\3\r\y\a\s\6\w\u\d\6\2\f\b\g\0\r\s\h\q\i\h\w\p\w\9\9\4\u\0\o\e\r\0\j\9\d\3\4\b\1\k\f\i\h\p\2\1\z\g\x\1\z\f\f\8\3\p\0\7\6\k\i\g\j\l\c\6\d\u\e\9\w\1\c\5\x\m\u\2\9\k\r\k\a\d\g\y\t\3\0\y\x\q\n\c\5\7\i\v\m\6\u\g\d\m\6\t\4\i\f\x\k\t\u\r\z\7\7\t\5\d\h\j\r\5\0\w\1\a\d\g\v\s\q\5\n\l\m\0\9\y\2\i\0\2\l\n\m\q\9\m\2\v\2\9\d\y\9\c\q\0\a\d\9\w\v\f\f\v\8\z\b\r\j\6\y\3\x\w\5\h\c\0\a\h\q\w\p\c\b\j\2\u\w\4\u\4\3\4\f\d\x\c\a\a\y\b\6\9\s\s\p\a\n\g\u\s\z\t\h\i\c\x\c\k\0\4\r\1\o\u\u\1\1\b\9\4\h\a\g\4\l\5\p\t\z\b\s\m\m\w\y\1\5\f\h\o\i\t\b\2\u\s\3\g\w\0\w\5\v\c\q\f\q\b\x\1\9\b\j\g\z\a\q\q\j\7\c\v\5\k\s\l\r\1\6\c\d\l\i\b\8\g\c\x\1\1\x\g\b\g\x\c\2\x\a\n\k\y\w\b\d\q\d\i\8\u\l\i\k\z\i\0\b\6\k\b\a\8\h\r\z\o\b\o\x\y\x\l\n\t\n\9\u\c\b\d\f\q\k\h\r\8\3\d\h\w\6\s\m\o\w\x\6\v\j\6\w\4\5\d\3\j\y\g\5\m\r\y\e\s\f\7\x\r\y\c\a\k\q\r\8\s\6\x\u\b\w\0\n\5\f\y\c\h\q\t\3\4\e\4\4\q\y\z\7\9\b\c\c\v\z\8\7\d\v\e\w\8\g\o\w\m\x\t\q\t\a\t\x\t\l\s\a\s\c\n\c\7\2\d\t\c\c\o\w\z\x\0\w\a\e\8\q\a\e\k\t ]] 00:25:42.381 21:22:04 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:42.381 21:22:04 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:25:42.381 [2024-06-07 21:22:05.032696] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
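The backslash-saturated right-hand sides in these comparisons (here and in the earlier dd_rw_offset check) are not corruption: inside [[ lhs == rhs ]] bash treats an unquoted right side as a glob pattern, so the harness escapes every character of the expected string to force a literal, byte-for-byte match. The trick in miniature, with a toy value:

# Sketch: escape the expected value so [[ == ]] cannot glob-match.
expected='a*b?c'
escaped=$(printf '%s' "$expected" | sed 's/./\\&/g')   # becomes \a\*\b\?\c
actual='a*b?c'
[[ $actual == $escaped ]] && echo "literal match"      # '*' and '?' stay literal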
00:25:42.381 [2024-06-07 21:22:05.032959] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148305 ] 00:25:42.640 [2024-06-07 21:22:05.188617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.640 [2024-06-07 21:22:05.284708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.158  Copying: 512/512 [B] (average 166 kBps) 00:25:43.158 00:25:43.158 21:22:05 -- dd/posix.sh@93 -- # [[ yr8fcb5jcq2yz3l6rdfb0obammnmcr99pu3ryas6wud62fbg0rshqihwpw994u0oer0j9d34b1kfihp21zgx1zff83p076kigjlc6due9w1c5xmu29krkadgyt30yxqnc57ivm6ugdm6t4ifxkturz77t5dhjr50w1adgvsq5nlm09y2i02lnmq9m2v29dy9cq0ad9wvffv8zbrj6y3xw5hc0ahqwpcbj2uw4u434fdxcaayb69sspanguszthicxck04r1ouu11b94hag4l5ptzbsmmwy15fhoitb2us3gw0w5vcqfqbx19bjgzaqqj7cv5kslr16cdlib8gcx11xgbgxc2xankywbdqdi8ulikzi0b6kba8hrzoboxyxlntn9ucbdfqkhr83dhw6smowx6vj6w45d3jyg5mryesf7xrycakqr8s6xubw0n5fychqt34e44qyz79bccvz87dvew8gowmxtqtatxtlsascnc72dtccowzx0wae8qaekt == \y\r\8\f\c\b\5\j\c\q\2\y\z\3\l\6\r\d\f\b\0\o\b\a\m\m\n\m\c\r\9\9\p\u\3\r\y\a\s\6\w\u\d\6\2\f\b\g\0\r\s\h\q\i\h\w\p\w\9\9\4\u\0\o\e\r\0\j\9\d\3\4\b\1\k\f\i\h\p\2\1\z\g\x\1\z\f\f\8\3\p\0\7\6\k\i\g\j\l\c\6\d\u\e\9\w\1\c\5\x\m\u\2\9\k\r\k\a\d\g\y\t\3\0\y\x\q\n\c\5\7\i\v\m\6\u\g\d\m\6\t\4\i\f\x\k\t\u\r\z\7\7\t\5\d\h\j\r\5\0\w\1\a\d\g\v\s\q\5\n\l\m\0\9\y\2\i\0\2\l\n\m\q\9\m\2\v\2\9\d\y\9\c\q\0\a\d\9\w\v\f\f\v\8\z\b\r\j\6\y\3\x\w\5\h\c\0\a\h\q\w\p\c\b\j\2\u\w\4\u\4\3\4\f\d\x\c\a\a\y\b\6\9\s\s\p\a\n\g\u\s\z\t\h\i\c\x\c\k\0\4\r\1\o\u\u\1\1\b\9\4\h\a\g\4\l\5\p\t\z\b\s\m\m\w\y\1\5\f\h\o\i\t\b\2\u\s\3\g\w\0\w\5\v\c\q\f\q\b\x\1\9\b\j\g\z\a\q\q\j\7\c\v\5\k\s\l\r\1\6\c\d\l\i\b\8\g\c\x\1\1\x\g\b\g\x\c\2\x\a\n\k\y\w\b\d\q\d\i\8\u\l\i\k\z\i\0\b\6\k\b\a\8\h\r\z\o\b\o\x\y\x\l\n\t\n\9\u\c\b\d\f\q\k\h\r\8\3\d\h\w\6\s\m\o\w\x\6\v\j\6\w\4\5\d\3\j\y\g\5\m\r\y\e\s\f\7\x\r\y\c\a\k\q\r\8\s\6\x\u\b\w\0\n\5\f\y\c\h\q\t\3\4\e\4\4\q\y\z\7\9\b\c\c\v\z\8\7\d\v\e\w\8\g\o\w\m\x\t\q\t\a\t\x\t\l\s\a\s\c\n\c\7\2\d\t\c\c\o\w\z\x\0\w\a\e\8\q\a\e\k\t ]] 00:25:43.158 21:22:05 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:43.158 21:22:05 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:25:43.417 [2024-06-07 21:22:05.884828] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:43.417 [2024-06-07 21:22:05.885677] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148338 ] 00:25:43.417 [2024-06-07 21:22:06.048342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.676 [2024-06-07 21:22:06.111201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.936  Copying: 512/512 [B] (average 166 kBps) 00:25:43.936 00:25:43.936 21:22:06 -- dd/posix.sh@93 -- # [[ yr8fcb5jcq2yz3l6rdfb0obammnmcr99pu3ryas6wud62fbg0rshqihwpw994u0oer0j9d34b1kfihp21zgx1zff83p076kigjlc6due9w1c5xmu29krkadgyt30yxqnc57ivm6ugdm6t4ifxkturz77t5dhjr50w1adgvsq5nlm09y2i02lnmq9m2v29dy9cq0ad9wvffv8zbrj6y3xw5hc0ahqwpcbj2uw4u434fdxcaayb69sspanguszthicxck04r1ouu11b94hag4l5ptzbsmmwy15fhoitb2us3gw0w5vcqfqbx19bjgzaqqj7cv5kslr16cdlib8gcx11xgbgxc2xankywbdqdi8ulikzi0b6kba8hrzoboxyxlntn9ucbdfqkhr83dhw6smowx6vj6w45d3jyg5mryesf7xrycakqr8s6xubw0n5fychqt34e44qyz79bccvz87dvew8gowmxtqtatxtlsascnc72dtccowzx0wae8qaekt == \y\r\8\f\c\b\5\j\c\q\2\y\z\3\l\6\r\d\f\b\0\o\b\a\m\m\n\m\c\r\9\9\p\u\3\r\y\a\s\6\w\u\d\6\2\f\b\g\0\r\s\h\q\i\h\w\p\w\9\9\4\u\0\o\e\r\0\j\9\d\3\4\b\1\k\f\i\h\p\2\1\z\g\x\1\z\f\f\8\3\p\0\7\6\k\i\g\j\l\c\6\d\u\e\9\w\1\c\5\x\m\u\2\9\k\r\k\a\d\g\y\t\3\0\y\x\q\n\c\5\7\i\v\m\6\u\g\d\m\6\t\4\i\f\x\k\t\u\r\z\7\7\t\5\d\h\j\r\5\0\w\1\a\d\g\v\s\q\5\n\l\m\0\9\y\2\i\0\2\l\n\m\q\9\m\2\v\2\9\d\y\9\c\q\0\a\d\9\w\v\f\f\v\8\z\b\r\j\6\y\3\x\w\5\h\c\0\a\h\q\w\p\c\b\j\2\u\w\4\u\4\3\4\f\d\x\c\a\a\y\b\6\9\s\s\p\a\n\g\u\s\z\t\h\i\c\x\c\k\0\4\r\1\o\u\u\1\1\b\9\4\h\a\g\4\l\5\p\t\z\b\s\m\m\w\y\1\5\f\h\o\i\t\b\2\u\s\3\g\w\0\w\5\v\c\q\f\q\b\x\1\9\b\j\g\z\a\q\q\j\7\c\v\5\k\s\l\r\1\6\c\d\l\i\b\8\g\c\x\1\1\x\g\b\g\x\c\2\x\a\n\k\y\w\b\d\q\d\i\8\u\l\i\k\z\i\0\b\6\k\b\a\8\h\r\z\o\b\o\x\y\x\l\n\t\n\9\u\c\b\d\f\q\k\h\r\8\3\d\h\w\6\s\m\o\w\x\6\v\j\6\w\4\5\d\3\j\y\g\5\m\r\y\e\s\f\7\x\r\y\c\a\k\q\r\8\s\6\x\u\b\w\0\n\5\f\y\c\h\q\t\3\4\e\4\4\q\y\z\7\9\b\c\c\v\z\8\7\d\v\e\w\8\g\o\w\m\x\t\q\t\a\t\x\t\l\s\a\s\c\n\c\7\2\d\t\c\c\o\w\z\x\0\w\a\e\8\q\a\e\k\t ]] 00:25:43.936 21:22:06 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:25:43.936 21:22:06 -- dd/posix.sh@86 -- # gen_bytes 512 00:25:43.936 21:22:06 -- dd/common.sh@98 -- # xtrace_disable 00:25:43.936 21:22:06 -- common/autotest_common.sh@10 -- # set +x 00:25:43.936 21:22:06 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:43.936 21:22:06 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:25:43.936 [2024-06-07 21:22:06.575155] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:43.936 [2024-06-07 21:22:06.575432] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148347 ] 00:25:44.194 [2024-06-07 21:22:06.742118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.194 [2024-06-07 21:22:06.804399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.712  Copying: 512/512 [B] (average 500 kBps) 00:25:44.712 00:25:44.713 21:22:07 -- dd/posix.sh@93 -- # [[ wk86qkla3doonki6dzw4wzeyqu4rhv9r3h1ccdrfmw8a2yhb9ckszu0lrp93ma2g6eskx33lhvjjh3qp779qrhjlu14c2v8kwa9vftoyonvk4uxw10twukc95xsmwjpjhnvdqwou6379rxmrnbw44bny7iu4p2bk9z6y14x6qnxq0hxqg5vd99exj4g6a6dmtzxv0126io581bhce9amg895ubj0nkmpiiblj6o5n2xw990o9km3wbwvkky0ase2nj1mhxuj2lqf5io50k6wjkcboth37vsc9qp75o5sdour5hk4ztyk6uwr708o2pkhf847pmew52ho9d3lhenbbnzz8st8uo09o9sks5w3wr9ow52lgw9yomnxs9flye13rkdasjxpz0gndkh1so7qbio5qpvjrgofi2cihl05b0d751kd1bl3ch8psyu10xfnjo7puglkht7d7o82397eg88gdi23bmb8d8veajhcmyszt212pkkcrshwsj4ias48 == \w\k\8\6\q\k\l\a\3\d\o\o\n\k\i\6\d\z\w\4\w\z\e\y\q\u\4\r\h\v\9\r\3\h\1\c\c\d\r\f\m\w\8\a\2\y\h\b\9\c\k\s\z\u\0\l\r\p\9\3\m\a\2\g\6\e\s\k\x\3\3\l\h\v\j\j\h\3\q\p\7\7\9\q\r\h\j\l\u\1\4\c\2\v\8\k\w\a\9\v\f\t\o\y\o\n\v\k\4\u\x\w\1\0\t\w\u\k\c\9\5\x\s\m\w\j\p\j\h\n\v\d\q\w\o\u\6\3\7\9\r\x\m\r\n\b\w\4\4\b\n\y\7\i\u\4\p\2\b\k\9\z\6\y\1\4\x\6\q\n\x\q\0\h\x\q\g\5\v\d\9\9\e\x\j\4\g\6\a\6\d\m\t\z\x\v\0\1\2\6\i\o\5\8\1\b\h\c\e\9\a\m\g\8\9\5\u\b\j\0\n\k\m\p\i\i\b\l\j\6\o\5\n\2\x\w\9\9\0\o\9\k\m\3\w\b\w\v\k\k\y\0\a\s\e\2\n\j\1\m\h\x\u\j\2\l\q\f\5\i\o\5\0\k\6\w\j\k\c\b\o\t\h\3\7\v\s\c\9\q\p\7\5\o\5\s\d\o\u\r\5\h\k\4\z\t\y\k\6\u\w\r\7\0\8\o\2\p\k\h\f\8\4\7\p\m\e\w\5\2\h\o\9\d\3\l\h\e\n\b\b\n\z\z\8\s\t\8\u\o\0\9\o\9\s\k\s\5\w\3\w\r\9\o\w\5\2\l\g\w\9\y\o\m\n\x\s\9\f\l\y\e\1\3\r\k\d\a\s\j\x\p\z\0\g\n\d\k\h\1\s\o\7\q\b\i\o\5\q\p\v\j\r\g\o\f\i\2\c\i\h\l\0\5\b\0\d\7\5\1\k\d\1\b\l\3\c\h\8\p\s\y\u\1\0\x\f\n\j\o\7\p\u\g\l\k\h\t\7\d\7\o\8\2\3\9\7\e\g\8\8\g\d\i\2\3\b\m\b\8\d\8\v\e\a\j\h\c\m\y\s\z\t\2\1\2\p\k\k\c\r\s\h\w\s\j\4\i\a\s\4\8 ]] 00:25:44.713 21:22:07 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:44.713 21:22:07 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:25:44.713 [2024-06-07 21:22:07.252083] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
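[annotation] The long backslash runs on the right-hand side of each [[ ... == \w\k\8\6... ]] line are not corruption: bash's xtrace prints a quoted match operand of == inside [[ ]] with every character individually escaped, which shows that the comparison is a literal match rather than a glob. A small demonstration of the rendering, assuming plain bash behaviour:

    $ set -x
    $ payload='ab*c'
    $ [[ $payload == "$payload" ]]
    + [[ ab*c == \a\b\*\c ]]   # quoted RHS is rendered escaped: literal match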
00:25:44.713 [2024-06-07 21:22:07.252536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148360 ] 00:25:44.971 [2024-06-07 21:22:07.417742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.971 [2024-06-07 21:22:07.473723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.230  Copying: 512/512 [B] (average 500 kBps) 00:25:45.230 00:25:45.230 21:22:07 -- dd/posix.sh@93 -- # [[ wk86qkla3doonki6dzw4wzeyqu4rhv9r3h1ccdrfmw8a2yhb9ckszu0lrp93ma2g6eskx33lhvjjh3qp779qrhjlu14c2v8kwa9vftoyonvk4uxw10twukc95xsmwjpjhnvdqwou6379rxmrnbw44bny7iu4p2bk9z6y14x6qnxq0hxqg5vd99exj4g6a6dmtzxv0126io581bhce9amg895ubj0nkmpiiblj6o5n2xw990o9km3wbwvkky0ase2nj1mhxuj2lqf5io50k6wjkcboth37vsc9qp75o5sdour5hk4ztyk6uwr708o2pkhf847pmew52ho9d3lhenbbnzz8st8uo09o9sks5w3wr9ow52lgw9yomnxs9flye13rkdasjxpz0gndkh1so7qbio5qpvjrgofi2cihl05b0d751kd1bl3ch8psyu10xfnjo7puglkht7d7o82397eg88gdi23bmb8d8veajhcmyszt212pkkcrshwsj4ias48 == \w\k\8\6\q\k\l\a\3\d\o\o\n\k\i\6\d\z\w\4\w\z\e\y\q\u\4\r\h\v\9\r\3\h\1\c\c\d\r\f\m\w\8\a\2\y\h\b\9\c\k\s\z\u\0\l\r\p\9\3\m\a\2\g\6\e\s\k\x\3\3\l\h\v\j\j\h\3\q\p\7\7\9\q\r\h\j\l\u\1\4\c\2\v\8\k\w\a\9\v\f\t\o\y\o\n\v\k\4\u\x\w\1\0\t\w\u\k\c\9\5\x\s\m\w\j\p\j\h\n\v\d\q\w\o\u\6\3\7\9\r\x\m\r\n\b\w\4\4\b\n\y\7\i\u\4\p\2\b\k\9\z\6\y\1\4\x\6\q\n\x\q\0\h\x\q\g\5\v\d\9\9\e\x\j\4\g\6\a\6\d\m\t\z\x\v\0\1\2\6\i\o\5\8\1\b\h\c\e\9\a\m\g\8\9\5\u\b\j\0\n\k\m\p\i\i\b\l\j\6\o\5\n\2\x\w\9\9\0\o\9\k\m\3\w\b\w\v\k\k\y\0\a\s\e\2\n\j\1\m\h\x\u\j\2\l\q\f\5\i\o\5\0\k\6\w\j\k\c\b\o\t\h\3\7\v\s\c\9\q\p\7\5\o\5\s\d\o\u\r\5\h\k\4\z\t\y\k\6\u\w\r\7\0\8\o\2\p\k\h\f\8\4\7\p\m\e\w\5\2\h\o\9\d\3\l\h\e\n\b\b\n\z\z\8\s\t\8\u\o\0\9\o\9\s\k\s\5\w\3\w\r\9\o\w\5\2\l\g\w\9\y\o\m\n\x\s\9\f\l\y\e\1\3\r\k\d\a\s\j\x\p\z\0\g\n\d\k\h\1\s\o\7\q\b\i\o\5\q\p\v\j\r\g\o\f\i\2\c\i\h\l\0\5\b\0\d\7\5\1\k\d\1\b\l\3\c\h\8\p\s\y\u\1\0\x\f\n\j\o\7\p\u\g\l\k\h\t\7\d\7\o\8\2\3\9\7\e\g\8\8\g\d\i\2\3\b\m\b\8\d\8\v\e\a\j\h\c\m\y\s\z\t\2\1\2\p\k\k\c\r\s\h\w\s\j\4\i\a\s\4\8 ]] 00:25:45.230 21:22:07 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:45.230 21:22:07 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:25:45.489 [2024-06-07 21:22:07.922621] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:45.489 [2024-06-07 21:22:07.923073] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148376 ] 00:25:45.489 [2024-06-07 21:22:08.086964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.489 [2024-06-07 21:22:08.151466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.006  Copying: 512/512 [B] (average 166 kBps) 00:25:46.006 00:25:46.006 21:22:08 -- dd/posix.sh@93 -- # [[ wk86qkla3doonki6dzw4wzeyqu4rhv9r3h1ccdrfmw8a2yhb9ckszu0lrp93ma2g6eskx33lhvjjh3qp779qrhjlu14c2v8kwa9vftoyonvk4uxw10twukc95xsmwjpjhnvdqwou6379rxmrnbw44bny7iu4p2bk9z6y14x6qnxq0hxqg5vd99exj4g6a6dmtzxv0126io581bhce9amg895ubj0nkmpiiblj6o5n2xw990o9km3wbwvkky0ase2nj1mhxuj2lqf5io50k6wjkcboth37vsc9qp75o5sdour5hk4ztyk6uwr708o2pkhf847pmew52ho9d3lhenbbnzz8st8uo09o9sks5w3wr9ow52lgw9yomnxs9flye13rkdasjxpz0gndkh1so7qbio5qpvjrgofi2cihl05b0d751kd1bl3ch8psyu10xfnjo7puglkht7d7o82397eg88gdi23bmb8d8veajhcmyszt212pkkcrshwsj4ias48 == \w\k\8\6\q\k\l\a\3\d\o\o\n\k\i\6\d\z\w\4\w\z\e\y\q\u\4\r\h\v\9\r\3\h\1\c\c\d\r\f\m\w\8\a\2\y\h\b\9\c\k\s\z\u\0\l\r\p\9\3\m\a\2\g\6\e\s\k\x\3\3\l\h\v\j\j\h\3\q\p\7\7\9\q\r\h\j\l\u\1\4\c\2\v\8\k\w\a\9\v\f\t\o\y\o\n\v\k\4\u\x\w\1\0\t\w\u\k\c\9\5\x\s\m\w\j\p\j\h\n\v\d\q\w\o\u\6\3\7\9\r\x\m\r\n\b\w\4\4\b\n\y\7\i\u\4\p\2\b\k\9\z\6\y\1\4\x\6\q\n\x\q\0\h\x\q\g\5\v\d\9\9\e\x\j\4\g\6\a\6\d\m\t\z\x\v\0\1\2\6\i\o\5\8\1\b\h\c\e\9\a\m\g\8\9\5\u\b\j\0\n\k\m\p\i\i\b\l\j\6\o\5\n\2\x\w\9\9\0\o\9\k\m\3\w\b\w\v\k\k\y\0\a\s\e\2\n\j\1\m\h\x\u\j\2\l\q\f\5\i\o\5\0\k\6\w\j\k\c\b\o\t\h\3\7\v\s\c\9\q\p\7\5\o\5\s\d\o\u\r\5\h\k\4\z\t\y\k\6\u\w\r\7\0\8\o\2\p\k\h\f\8\4\7\p\m\e\w\5\2\h\o\9\d\3\l\h\e\n\b\b\n\z\z\8\s\t\8\u\o\0\9\o\9\s\k\s\5\w\3\w\r\9\o\w\5\2\l\g\w\9\y\o\m\n\x\s\9\f\l\y\e\1\3\r\k\d\a\s\j\x\p\z\0\g\n\d\k\h\1\s\o\7\q\b\i\o\5\q\p\v\j\r\g\o\f\i\2\c\i\h\l\0\5\b\0\d\7\5\1\k\d\1\b\l\3\c\h\8\p\s\y\u\1\0\x\f\n\j\o\7\p\u\g\l\k\h\t\7\d\7\o\8\2\3\9\7\e\g\8\8\g\d\i\2\3\b\m\b\8\d\8\v\e\a\j\h\c\m\y\s\z\t\2\1\2\p\k\k\c\r\s\h\w\s\j\4\i\a\s\4\8 ]] 00:25:46.006 21:22:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:46.006 21:22:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:25:46.006 [2024-06-07 21:22:08.590055] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:46.007 [2024-06-07 21:22:08.590310] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148382 ] 00:25:46.266 [2024-06-07 21:22:08.757298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.266 [2024-06-07 21:22:08.830731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.833  Copying: 512/512 [B] (average 250 kBps) 00:25:46.833 00:25:46.833 21:22:09 -- dd/posix.sh@93 -- # [[ wk86qkla3doonki6dzw4wzeyqu4rhv9r3h1ccdrfmw8a2yhb9ckszu0lrp93ma2g6eskx33lhvjjh3qp779qrhjlu14c2v8kwa9vftoyonvk4uxw10twukc95xsmwjpjhnvdqwou6379rxmrnbw44bny7iu4p2bk9z6y14x6qnxq0hxqg5vd99exj4g6a6dmtzxv0126io581bhce9amg895ubj0nkmpiiblj6o5n2xw990o9km3wbwvkky0ase2nj1mhxuj2lqf5io50k6wjkcboth37vsc9qp75o5sdour5hk4ztyk6uwr708o2pkhf847pmew52ho9d3lhenbbnzz8st8uo09o9sks5w3wr9ow52lgw9yomnxs9flye13rkdasjxpz0gndkh1so7qbio5qpvjrgofi2cihl05b0d751kd1bl3ch8psyu10xfnjo7puglkht7d7o82397eg88gdi23bmb8d8veajhcmyszt212pkkcrshwsj4ias48 == \w\k\8\6\q\k\l\a\3\d\o\o\n\k\i\6\d\z\w\4\w\z\e\y\q\u\4\r\h\v\9\r\3\h\1\c\c\d\r\f\m\w\8\a\2\y\h\b\9\c\k\s\z\u\0\l\r\p\9\3\m\a\2\g\6\e\s\k\x\3\3\l\h\v\j\j\h\3\q\p\7\7\9\q\r\h\j\l\u\1\4\c\2\v\8\k\w\a\9\v\f\t\o\y\o\n\v\k\4\u\x\w\1\0\t\w\u\k\c\9\5\x\s\m\w\j\p\j\h\n\v\d\q\w\o\u\6\3\7\9\r\x\m\r\n\b\w\4\4\b\n\y\7\i\u\4\p\2\b\k\9\z\6\y\1\4\x\6\q\n\x\q\0\h\x\q\g\5\v\d\9\9\e\x\j\4\g\6\a\6\d\m\t\z\x\v\0\1\2\6\i\o\5\8\1\b\h\c\e\9\a\m\g\8\9\5\u\b\j\0\n\k\m\p\i\i\b\l\j\6\o\5\n\2\x\w\9\9\0\o\9\k\m\3\w\b\w\v\k\k\y\0\a\s\e\2\n\j\1\m\h\x\u\j\2\l\q\f\5\i\o\5\0\k\6\w\j\k\c\b\o\t\h\3\7\v\s\c\9\q\p\7\5\o\5\s\d\o\u\r\5\h\k\4\z\t\y\k\6\u\w\r\7\0\8\o\2\p\k\h\f\8\4\7\p\m\e\w\5\2\h\o\9\d\3\l\h\e\n\b\b\n\z\z\8\s\t\8\u\o\0\9\o\9\s\k\s\5\w\3\w\r\9\o\w\5\2\l\g\w\9\y\o\m\n\x\s\9\f\l\y\e\1\3\r\k\d\a\s\j\x\p\z\0\g\n\d\k\h\1\s\o\7\q\b\i\o\5\q\p\v\j\r\g\o\f\i\2\c\i\h\l\0\5\b\0\d\7\5\1\k\d\1\b\l\3\c\h\8\p\s\y\u\1\0\x\f\n\j\o\7\p\u\g\l\k\h\t\7\d\7\o\8\2\3\9\7\e\g\8\8\g\d\i\2\3\b\m\b\8\d\8\v\e\a\j\h\c\m\y\s\z\t\2\1\2\p\k\k\c\r\s\h\w\s\j\4\i\a\s\4\8 ]] 00:25:46.833 ************************************ 00:25:46.833 END TEST dd_flags_misc 00:25:46.833 ************************************ 00:25:46.833 00:25:46.833 real 0m6.006s 00:25:46.833 user 0m3.134s 00:25:46.833 sys 0m1.780s 00:25:46.833 21:22:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:46.833 21:22:09 -- common/autotest_common.sh@10 -- # set +x 00:25:46.833 21:22:09 -- dd/posix.sh@131 -- # tests_forced_aio 00:25:46.833 21:22:09 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:25:46.833 * Second test run, using AIO 00:25:46.833 21:22:09 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:25:46.833 21:22:09 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:25:46.833 21:22:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:46.833 21:22:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:46.833 21:22:09 -- common/autotest_common.sh@10 -- # set +x 00:25:46.833 ************************************ 00:25:46.833 START TEST dd_flag_append_forced_aio 00:25:46.833 ************************************ 00:25:46.833 21:22:09 -- common/autotest_common.sh@1104 -- # append 00:25:46.833 21:22:09 -- dd/posix.sh@16 -- # local dump0 00:25:46.833 21:22:09 -- dd/posix.sh@17 -- # local dump1 00:25:46.833 21:22:09 -- dd/posix.sh@19 -- # gen_bytes 32 00:25:46.833 21:22:09 -- dd/common.sh@98 -- # xtrace_disable 
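[annotation] The append test starting here writes two fresh 32-byte strings and then copies dump0 onto dump1 with O_APPEND; as the next lines show, the check passes only if the output file ends up as dump1 immediately followed by dump0. A minimal sketch under that reading of the trace (gen_bytes is the dd/common.sh helper invoked above; the urandom pipeline is a hypothetical stand-in for it):

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    gen32() { tr -dc 'a-z0-9' </dev/urandom | head -c 32; }  # stand-in for gen_bytes 32
    dump0=$(gen32)   # e.g. 819n3jca... in this run
    dump1=$(gen32)   # e.g. wbyqkka6... in this run
    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1
    "$DD" --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
    # O_APPEND must leave dump1's original content in place
    [[ $(< dd.dump1) == "${dump1}${dump0}" ]]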
00:25:46.833 21:22:09 -- common/autotest_common.sh@10 -- # set +x 00:25:46.833 21:22:09 -- dd/posix.sh@19 -- # dump0=819n3jcaybr3vt2jqrp34gl9uqwn7z3p 00:25:46.833 21:22:09 -- dd/posix.sh@20 -- # gen_bytes 32 00:25:46.833 21:22:09 -- dd/common.sh@98 -- # xtrace_disable 00:25:46.833 21:22:09 -- common/autotest_common.sh@10 -- # set +x 00:25:46.833 21:22:09 -- dd/posix.sh@20 -- # dump1=wbyqkka6kpgiol0v6juw7j2e717kgxml 00:25:46.833 21:22:09 -- dd/posix.sh@22 -- # printf %s 819n3jcaybr3vt2jqrp34gl9uqwn7z3p 00:25:46.833 21:22:09 -- dd/posix.sh@23 -- # printf %s wbyqkka6kpgiol0v6juw7j2e717kgxml 00:25:46.833 21:22:09 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:25:46.833 [2024-06-07 21:22:09.348767] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:46.833 [2024-06-07 21:22:09.349068] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148420 ] 00:25:47.091 [2024-06-07 21:22:09.517673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.091 [2024-06-07 21:22:09.576410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.349  Copying: 32/32 [B] (average 31 kBps) 00:25:47.349 00:25:47.349 21:22:09 -- dd/posix.sh@27 -- # [[ wbyqkka6kpgiol0v6juw7j2e717kgxml819n3jcaybr3vt2jqrp34gl9uqwn7z3p == \w\b\y\q\k\k\a\6\k\p\g\i\o\l\0\v\6\j\u\w\7\j\2\e\7\1\7\k\g\x\m\l\8\1\9\n\3\j\c\a\y\b\r\3\v\t\2\j\q\r\p\3\4\g\l\9\u\q\w\n\7\z\3\p ]] 00:25:47.349 ************************************ 00:25:47.349 END TEST dd_flag_append_forced_aio 00:25:47.349 00:25:47.349 real 0m0.639s 00:25:47.349 user 0m0.307s 00:25:47.349 sys 0m0.198s 00:25:47.349 21:22:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:47.349 21:22:09 -- common/autotest_common.sh@10 -- # set +x 00:25:47.349 ************************************ 00:25:47.349 21:22:09 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:25:47.349 21:22:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:47.349 21:22:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:47.349 21:22:09 -- common/autotest_common.sh@10 -- # set +x 00:25:47.349 ************************************ 00:25:47.349 START TEST dd_flag_directory_forced_aio 00:25:47.349 ************************************ 00:25:47.349 21:22:09 -- common/autotest_common.sh@1104 -- # directory 00:25:47.349 21:22:09 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:47.349 21:22:09 -- common/autotest_common.sh@640 -- # local es=0 00:25:47.349 21:22:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:47.349 21:22:09 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:47.349 21:22:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:47.349 21:22:09 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:47.349 21:22:09 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:47.349 21:22:09 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:47.349 21:22:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:47.349 21:22:09 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:47.349 21:22:09 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:47.349 21:22:09 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:47.608 [2024-06-07 21:22:10.035519] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:47.608 [2024-06-07 21:22:10.035808] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148454 ] 00:25:47.608 [2024-06-07 21:22:10.202679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.608 [2024-06-07 21:22:10.275357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.866 [2024-06-07 21:22:10.361653] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:47.866 [2024-06-07 21:22:10.361751] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:47.866 [2024-06-07 21:22:10.361796] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:47.866 [2024-06-07 21:22:10.484561] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:48.125 21:22:10 -- common/autotest_common.sh@643 -- # es=236 00:25:48.125 21:22:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:48.125 21:22:10 -- common/autotest_common.sh@652 -- # es=108 00:25:48.125 21:22:10 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:48.125 21:22:10 -- common/autotest_common.sh@660 -- # es=1 00:25:48.125 21:22:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:48.125 21:22:10 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:48.125 21:22:10 -- common/autotest_common.sh@640 -- # local es=0 00:25:48.125 21:22:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:48.125 21:22:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.125 21:22:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:48.125 21:22:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.125 21:22:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:48.125 21:22:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.125 21:22:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:48.125 21:22:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
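[annotation] Both halves of the directory test are negative checks: opening a regular dump file with O_DIRECTORY has to fail with "Not a directory", once on the input side (--iflag=directory, traced above) and once on the output side (--oflag=directory, traced next). The NOT/valid_exec_arg scaffolding from autotest_common.sh asserts that spdk_dd exits non-zero; the es=236 / es=108 / es=1 hops are its normalization of the raw exit status before the final (( !es == 0 )) assertion. Reduced to its effect, assuming that reading of the wrappers:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    NOT() { ! "$@"; }   # simplified stand-in for autotest_common.sh's NOT
    NOT "$DD" --aio --if=dd.dump0 --iflag=directory --of=dd.dump0
    NOT "$DD" --aio --if=dd.dump0 --of=dd.dump0 --oflag=directory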
00:25:48.125 21:22:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:48.125 21:22:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:48.125 [2024-06-07 21:22:10.659257] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:48.125 [2024-06-07 21:22:10.659537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148466 ] 00:25:48.435 [2024-06-07 21:22:10.827170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.435 [2024-06-07 21:22:10.910127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.435 [2024-06-07 21:22:11.001003] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:48.435 [2024-06-07 21:22:11.001115] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:48.435 [2024-06-07 21:22:11.001156] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:48.716 [2024-06-07 21:22:11.126539] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:48.716 21:22:11 -- common/autotest_common.sh@643 -- # es=236 00:25:48.716 21:22:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:48.716 21:22:11 -- common/autotest_common.sh@652 -- # es=108 00:25:48.716 21:22:11 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:48.716 21:22:11 -- common/autotest_common.sh@660 -- # es=1 00:25:48.716 21:22:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:48.716 00:25:48.716 real 0m1.262s 00:25:48.716 user 0m0.663s 00:25:48.716 sys 0m0.399s 00:25:48.716 21:22:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:48.716 21:22:11 -- common/autotest_common.sh@10 -- # set +x 00:25:48.716 ************************************ 00:25:48.716 END TEST dd_flag_directory_forced_aio 00:25:48.716 ************************************ 00:25:48.716 21:22:11 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:25:48.716 21:22:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:48.716 21:22:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:48.716 21:22:11 -- common/autotest_common.sh@10 -- # set +x 00:25:48.716 ************************************ 00:25:48.716 START TEST dd_flag_nofollow_forced_aio 00:25:48.716 ************************************ 00:25:48.716 21:22:11 -- common/autotest_common.sh@1104 -- # nofollow 00:25:48.716 21:22:11 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:48.716 21:22:11 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:48.716 21:22:11 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:48.716 21:22:11 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:48.716 21:22:11 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:48.716 21:22:11 -- common/autotest_common.sh@640 -- # local es=0 00:25:48.716 21:22:11 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:48.716 21:22:11 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.716 21:22:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:48.716 21:22:11 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.716 21:22:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:48.716 21:22:11 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.716 21:22:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:48.716 21:22:11 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.716 21:22:11 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:48.716 21:22:11 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:48.716 [2024-06-07 21:22:11.361904] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:48.716 [2024-06-07 21:22:11.362325] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148501 ] 00:25:48.975 [2024-06-07 21:22:11.527861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.975 [2024-06-07 21:22:11.604418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.235 [2024-06-07 21:22:11.695101] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:49.235 [2024-06-07 21:22:11.695202] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:49.235 [2024-06-07 21:22:11.695257] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:49.235 [2024-06-07 21:22:11.822656] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:49.494 21:22:11 -- common/autotest_common.sh@643 -- # es=216 00:25:49.494 21:22:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:49.494 21:22:11 -- common/autotest_common.sh@652 -- # es=88 00:25:49.494 21:22:11 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:49.494 21:22:11 -- common/autotest_common.sh@660 -- # es=1 00:25:49.494 21:22:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:49.494 21:22:11 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:49.494 21:22:11 -- common/autotest_common.sh@640 -- # local es=0 00:25:49.494 21:22:11 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:49.494 21:22:11 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:49.494 21:22:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:49.494 21:22:11 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:49.494 21:22:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:49.494 21:22:11 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:49.494 21:22:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:49.494 21:22:11 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:49.494 21:22:11 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:49.494 21:22:11 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:49.494 [2024-06-07 21:22:11.997443] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:49.494 [2024-06-07 21:22:11.997739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148522 ] 00:25:49.494 [2024-06-07 21:22:12.167476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.752 [2024-06-07 21:22:12.243274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.752 [2024-06-07 21:22:12.330775] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:49.752 [2024-06-07 21:22:12.330877] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:49.752 [2024-06-07 21:22:12.330929] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:50.010 [2024-06-07 21:22:12.457468] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:50.010 21:22:12 -- common/autotest_common.sh@643 -- # es=216 00:25:50.010 21:22:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:50.010 21:22:12 -- common/autotest_common.sh@652 -- # es=88 00:25:50.010 21:22:12 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:50.010 21:22:12 -- common/autotest_common.sh@660 -- # es=1 00:25:50.010 21:22:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:50.010 21:22:12 -- dd/posix.sh@46 -- # gen_bytes 512 00:25:50.010 21:22:12 -- dd/common.sh@98 -- # xtrace_disable 00:25:50.010 21:22:12 -- common/autotest_common.sh@10 -- # set +x 00:25:50.010 21:22:12 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:50.011 [2024-06-07 21:22:12.615617] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
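[annotation] The nofollow test hinges on the two symlinks created above with ln -fs: opening either link with O_NOFOLLOW must fail with "Too many levels of symbolic links" (ELOOP), while the plain copy through the link that has just been launched must succeed and yield identical content. A compact sketch, assuming the same simplified NOT helper as before:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    NOT() { ! "$@"; }
    ln -fs dd.dump0 dd.dump0.link
    ln -fs dd.dump1 dd.dump1.link
    # O_NOFOLLOW refuses the symlink on either side of the copy...
    NOT "$DD" --aio --if=dd.dump0.link --iflag=nofollow --of=dd.dump1
    NOT "$DD" --aio --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow
    # ...but without the flag the link is followed and the copy is intact
    "$DD" --aio --if=dd.dump0.link --of=dd.dump1
    [[ $(< dd.dump0) == "$(< dd.dump1)" ]]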
00:25:50.011 [2024-06-07 21:22:12.615867] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148525 ] 00:25:50.269 [2024-06-07 21:22:12.767547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.269 [2024-06-07 21:22:12.842761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.531  Copying: 512/512 [B] (average 500 kBps) 00:25:50.531 00:25:50.788 21:22:13 -- dd/posix.sh@49 -- # [[ pjbdbytdl90jf7ol835zj6lcogi5rhwdj26ohh0qcn6k3ot544jh6kzq3cqgat1d20yjaxsfa38ch9ud00tyzl3xjbyfq0ucwj9h5si7er8hehtsepiky1n23sb5eb0phzdax1t88xzlalqmimb59u9ox467khs77m9qo9dy90nlnas62u88bhaluis0v02funa96uzxe18yx5v2iv7vi62brtwdd15qppx2g7x5rdzjqveykwt1u1mc8p9bpvoa12y8z76obho468sea74zvxwt2sngsq0hznpq9e3gqge2iofl2nu5j8hj0cqh7fou3qr0xyouw94379i3y6cd63qkjrz3n5uu0qjt64egjclqgvfwb0870zoydi14hpxyz55yj6mhrf7tytnzgyhrejyr6es91drgmheep9ct5gie4ir6x3v9t4f17l4i2atono4xw9ob7p4h13zt74g0rfr99526hq286tvw0eql8yceskujjmajn8k28rfec9lh == \p\j\b\d\b\y\t\d\l\9\0\j\f\7\o\l\8\3\5\z\j\6\l\c\o\g\i\5\r\h\w\d\j\2\6\o\h\h\0\q\c\n\6\k\3\o\t\5\4\4\j\h\6\k\z\q\3\c\q\g\a\t\1\d\2\0\y\j\a\x\s\f\a\3\8\c\h\9\u\d\0\0\t\y\z\l\3\x\j\b\y\f\q\0\u\c\w\j\9\h\5\s\i\7\e\r\8\h\e\h\t\s\e\p\i\k\y\1\n\2\3\s\b\5\e\b\0\p\h\z\d\a\x\1\t\8\8\x\z\l\a\l\q\m\i\m\b\5\9\u\9\o\x\4\6\7\k\h\s\7\7\m\9\q\o\9\d\y\9\0\n\l\n\a\s\6\2\u\8\8\b\h\a\l\u\i\s\0\v\0\2\f\u\n\a\9\6\u\z\x\e\1\8\y\x\5\v\2\i\v\7\v\i\6\2\b\r\t\w\d\d\1\5\q\p\p\x\2\g\7\x\5\r\d\z\j\q\v\e\y\k\w\t\1\u\1\m\c\8\p\9\b\p\v\o\a\1\2\y\8\z\7\6\o\b\h\o\4\6\8\s\e\a\7\4\z\v\x\w\t\2\s\n\g\s\q\0\h\z\n\p\q\9\e\3\g\q\g\e\2\i\o\f\l\2\n\u\5\j\8\h\j\0\c\q\h\7\f\o\u\3\q\r\0\x\y\o\u\w\9\4\3\7\9\i\3\y\6\c\d\6\3\q\k\j\r\z\3\n\5\u\u\0\q\j\t\6\4\e\g\j\c\l\q\g\v\f\w\b\0\8\7\0\z\o\y\d\i\1\4\h\p\x\y\z\5\5\y\j\6\m\h\r\f\7\t\y\t\n\z\g\y\h\r\e\j\y\r\6\e\s\9\1\d\r\g\m\h\e\e\p\9\c\t\5\g\i\e\4\i\r\6\x\3\v\9\t\4\f\1\7\l\4\i\2\a\t\o\n\o\4\x\w\9\o\b\7\p\4\h\1\3\z\t\7\4\g\0\r\f\r\9\9\5\2\6\h\q\2\8\6\t\v\w\0\e\q\l\8\y\c\e\s\k\u\j\j\m\a\j\n\8\k\2\8\r\f\e\c\9\l\h ]] 00:25:50.788 ************************************ 00:25:50.788 END TEST dd_flag_nofollow_forced_aio 00:25:50.788 ************************************ 00:25:50.788 00:25:50.788 real 0m1.920s 00:25:50.788 user 0m0.957s 00:25:50.788 sys 0m0.633s 00:25:50.788 21:22:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:50.788 21:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:50.788 21:22:13 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:25:50.788 21:22:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:50.788 21:22:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:50.788 21:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:50.788 ************************************ 00:25:50.788 START TEST dd_flag_noatime_forced_aio 00:25:50.788 ************************************ 00:25:50.788 21:22:13 -- common/autotest_common.sh@1104 -- # noatime 00:25:50.788 21:22:13 -- dd/posix.sh@53 -- # local atime_if 00:25:50.788 21:22:13 -- dd/posix.sh@54 -- # local atime_of 00:25:50.788 21:22:13 -- dd/posix.sh@58 -- # gen_bytes 512 00:25:50.788 21:22:13 -- dd/common.sh@98 -- # xtrace_disable 00:25:50.788 21:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:50.788 21:22:13 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:50.788 21:22:13 -- dd/posix.sh@60 -- # atime_if=1717795332 
00:25:50.788 21:22:13 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:50.788 21:22:13 -- dd/posix.sh@61 -- # atime_of=1717795333 00:25:50.788 21:22:13 -- dd/posix.sh@66 -- # sleep 1 00:25:51.722 21:22:14 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:51.722 [2024-06-07 21:22:14.342814] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:51.722 [2024-06-07 21:22:14.343079] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148582 ] 00:25:51.981 [2024-06-07 21:22:14.499683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.981 [2024-06-07 21:22:14.571093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.498  Copying: 512/512 [B] (average 500 kBps) 00:25:52.498 00:25:52.498 21:22:14 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:52.498 21:22:14 -- dd/posix.sh@69 -- # (( atime_if == 1717795332 )) 00:25:52.498 21:22:14 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:52.498 21:22:14 -- dd/posix.sh@70 -- # (( atime_of == 1717795333 )) 00:25:52.498 21:22:14 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:52.498 [2024-06-07 21:22:14.994960] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
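[annotation] The noatime check works purely off stat: it records the input file's access time (atime_if=1717795332 above), sleeps one second so the clock can move, copies with --iflag=noatime, and requires the atime to be unchanged; the second copy just launched omits the flag, after which the atime must have advanced (the (( atime_if < ... )) check below). A sketch of that sequence, assuming this reading of dd/posix.sh:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    atime_if=$(stat --printf=%X dd.dump0)
    sleep 1
    "$DD" --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) == atime_if ))  # O_NOATIME: atime untouched
    "$DD" --aio --if=dd.dump0 --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) > atime_if ))   # plain read updates it

This only holds on a filesystem whose mount options (e.g. noatime) do not mask the update, which the passing run implies for this host.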
00:25:52.498 [2024-06-07 21:22:14.995203] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148589 ] 00:25:52.498 [2024-06-07 21:22:15.151872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.757 [2024-06-07 21:22:15.216181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.015  Copying: 512/512 [B] (average 500 kBps) 00:25:53.015 00:25:53.015 21:22:15 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:53.015 21:22:15 -- dd/posix.sh@73 -- # (( atime_if < 1717795335 )) 00:25:53.015 00:25:53.015 real 0m2.311s 00:25:53.015 user 0m0.658s 00:25:53.015 sys 0m0.387s 00:25:53.015 21:22:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:53.015 21:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:53.015 ************************************ 00:25:53.015 END TEST dd_flag_noatime_forced_aio 00:25:53.015 ************************************ 00:25:53.015 21:22:15 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:25:53.015 21:22:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:53.015 21:22:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:53.015 21:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:53.015 ************************************ 00:25:53.015 START TEST dd_flags_misc_forced_aio 00:25:53.015 ************************************ 00:25:53.015 21:22:15 -- common/autotest_common.sh@1104 -- # io 00:25:53.015 21:22:15 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:25:53.015 21:22:15 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:25:53.015 21:22:15 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:25:53.015 21:22:15 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:25:53.015 21:22:15 -- dd/posix.sh@86 -- # gen_bytes 512 00:25:53.015 21:22:15 -- dd/common.sh@98 -- # xtrace_disable 00:25:53.015 21:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:53.015 21:22:15 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:53.015 21:22:15 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:25:53.274 [2024-06-07 21:22:15.701476] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
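[annotation] From here the whole direct/nonblock/sync/dsync matrix repeats, but with the --aio flag that DD_APP+=("--aio") added earlier in the trace, so every copy exercises spdk_dd's POSIX AIO path instead of plain read/write; the pass criterion is the same byte-for-byte comparison. The only change to the earlier sketch, assuming DD_APP is the command array the script expands:

    DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
    DD_APP+=(--aio)   # second test run: force the AIO code path
    "${DD_APP[@]}" --if=dd.dump0 --iflag=direct --of=dd.dump1 --oflag=direct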
00:25:53.274 [2024-06-07 21:22:15.701706] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148625 ] 00:25:53.274 [2024-06-07 21:22:15.865979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.533 [2024-06-07 21:22:15.953549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.792  Copying: 512/512 [B] (average 500 kBps) 00:25:53.792 00:25:53.792 21:22:16 -- dd/posix.sh@93 -- # [[ 0wz6klm1kzqmz9b1ehek05lnyqsglriytxrewh6ltzarz9debgmvabe292qvp1zzpgoj41lut0wcvibocg3u5m31ivzc0cz7xh5raj55xdzncxpqhasi1eit41492m49fici6v0l4ppcfi1go92or24zsq2temwvqexenl162wrbz1ujdl8fexxo8zpa16fnqhguei2ckyys0zuw0il8cwr7kiucddrzo9tnmxewvd725h12xecxaq0ka2i635ulmx1l1gt1z7rj4ak2uoufy0cyq7xe8ugqryl67627yzalwyywyxirhfilh7ky9ldk5cjy1q86paijfhodazz7w0ctph5nwjr46ayukk53qg9vrzb5536cyuxw9wiima4umz5bkzabgyc3jqxqbkxpyjxg1yl19p2xamhkzvdorzruzpg2phlk7decl7qzf21yhsa22g0ii5v36q9zuooihvz81lq9i9d02trgs03jhfrg15y6khbsl3cmu0sf66b3 == \0\w\z\6\k\l\m\1\k\z\q\m\z\9\b\1\e\h\e\k\0\5\l\n\y\q\s\g\l\r\i\y\t\x\r\e\w\h\6\l\t\z\a\r\z\9\d\e\b\g\m\v\a\b\e\2\9\2\q\v\p\1\z\z\p\g\o\j\4\1\l\u\t\0\w\c\v\i\b\o\c\g\3\u\5\m\3\1\i\v\z\c\0\c\z\7\x\h\5\r\a\j\5\5\x\d\z\n\c\x\p\q\h\a\s\i\1\e\i\t\4\1\4\9\2\m\4\9\f\i\c\i\6\v\0\l\4\p\p\c\f\i\1\g\o\9\2\o\r\2\4\z\s\q\2\t\e\m\w\v\q\e\x\e\n\l\1\6\2\w\r\b\z\1\u\j\d\l\8\f\e\x\x\o\8\z\p\a\1\6\f\n\q\h\g\u\e\i\2\c\k\y\y\s\0\z\u\w\0\i\l\8\c\w\r\7\k\i\u\c\d\d\r\z\o\9\t\n\m\x\e\w\v\d\7\2\5\h\1\2\x\e\c\x\a\q\0\k\a\2\i\6\3\5\u\l\m\x\1\l\1\g\t\1\z\7\r\j\4\a\k\2\u\o\u\f\y\0\c\y\q\7\x\e\8\u\g\q\r\y\l\6\7\6\2\7\y\z\a\l\w\y\y\w\y\x\i\r\h\f\i\l\h\7\k\y\9\l\d\k\5\c\j\y\1\q\8\6\p\a\i\j\f\h\o\d\a\z\z\7\w\0\c\t\p\h\5\n\w\j\r\4\6\a\y\u\k\k\5\3\q\g\9\v\r\z\b\5\5\3\6\c\y\u\x\w\9\w\i\i\m\a\4\u\m\z\5\b\k\z\a\b\g\y\c\3\j\q\x\q\b\k\x\p\y\j\x\g\1\y\l\1\9\p\2\x\a\m\h\k\z\v\d\o\r\z\r\u\z\p\g\2\p\h\l\k\7\d\e\c\l\7\q\z\f\2\1\y\h\s\a\2\2\g\0\i\i\5\v\3\6\q\9\z\u\o\o\i\h\v\z\8\1\l\q\9\i\9\d\0\2\t\r\g\s\0\3\j\h\f\r\g\1\5\y\6\k\h\b\s\l\3\c\m\u\0\s\f\6\6\b\3 ]] 00:25:53.792 21:22:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:53.792 21:22:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:25:53.792 [2024-06-07 21:22:16.378731] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:53.792 [2024-06-07 21:22:16.378993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148659 ] 00:25:54.051 [2024-06-07 21:22:16.546720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.051 [2024-06-07 21:22:16.616311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.310  Copying: 512/512 [B] (average 500 kBps) 00:25:54.310 00:25:54.569 21:22:16 -- dd/posix.sh@93 -- # [[ 0wz6klm1kzqmz9b1ehek05lnyqsglriytxrewh6ltzarz9debgmvabe292qvp1zzpgoj41lut0wcvibocg3u5m31ivzc0cz7xh5raj55xdzncxpqhasi1eit41492m49fici6v0l4ppcfi1go92or24zsq2temwvqexenl162wrbz1ujdl8fexxo8zpa16fnqhguei2ckyys0zuw0il8cwr7kiucddrzo9tnmxewvd725h12xecxaq0ka2i635ulmx1l1gt1z7rj4ak2uoufy0cyq7xe8ugqryl67627yzalwyywyxirhfilh7ky9ldk5cjy1q86paijfhodazz7w0ctph5nwjr46ayukk53qg9vrzb5536cyuxw9wiima4umz5bkzabgyc3jqxqbkxpyjxg1yl19p2xamhkzvdorzruzpg2phlk7decl7qzf21yhsa22g0ii5v36q9zuooihvz81lq9i9d02trgs03jhfrg15y6khbsl3cmu0sf66b3 == \0\w\z\6\k\l\m\1\k\z\q\m\z\9\b\1\e\h\e\k\0\5\l\n\y\q\s\g\l\r\i\y\t\x\r\e\w\h\6\l\t\z\a\r\z\9\d\e\b\g\m\v\a\b\e\2\9\2\q\v\p\1\z\z\p\g\o\j\4\1\l\u\t\0\w\c\v\i\b\o\c\g\3\u\5\m\3\1\i\v\z\c\0\c\z\7\x\h\5\r\a\j\5\5\x\d\z\n\c\x\p\q\h\a\s\i\1\e\i\t\4\1\4\9\2\m\4\9\f\i\c\i\6\v\0\l\4\p\p\c\f\i\1\g\o\9\2\o\r\2\4\z\s\q\2\t\e\m\w\v\q\e\x\e\n\l\1\6\2\w\r\b\z\1\u\j\d\l\8\f\e\x\x\o\8\z\p\a\1\6\f\n\q\h\g\u\e\i\2\c\k\y\y\s\0\z\u\w\0\i\l\8\c\w\r\7\k\i\u\c\d\d\r\z\o\9\t\n\m\x\e\w\v\d\7\2\5\h\1\2\x\e\c\x\a\q\0\k\a\2\i\6\3\5\u\l\m\x\1\l\1\g\t\1\z\7\r\j\4\a\k\2\u\o\u\f\y\0\c\y\q\7\x\e\8\u\g\q\r\y\l\6\7\6\2\7\y\z\a\l\w\y\y\w\y\x\i\r\h\f\i\l\h\7\k\y\9\l\d\k\5\c\j\y\1\q\8\6\p\a\i\j\f\h\o\d\a\z\z\7\w\0\c\t\p\h\5\n\w\j\r\4\6\a\y\u\k\k\5\3\q\g\9\v\r\z\b\5\5\3\6\c\y\u\x\w\9\w\i\i\m\a\4\u\m\z\5\b\k\z\a\b\g\y\c\3\j\q\x\q\b\k\x\p\y\j\x\g\1\y\l\1\9\p\2\x\a\m\h\k\z\v\d\o\r\z\r\u\z\p\g\2\p\h\l\k\7\d\e\c\l\7\q\z\f\2\1\y\h\s\a\2\2\g\0\i\i\5\v\3\6\q\9\z\u\o\o\i\h\v\z\8\1\l\q\9\i\9\d\0\2\t\r\g\s\0\3\j\h\f\r\g\1\5\y\6\k\h\b\s\l\3\c\m\u\0\s\f\6\6\b\3 ]] 00:25:54.569 21:22:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:54.569 21:22:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:25:54.569 [2024-06-07 21:22:17.040094] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:54.569 [2024-06-07 21:22:17.040320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148671 ] 00:25:54.569 [2024-06-07 21:22:17.206528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.827 [2024-06-07 21:22:17.270622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.086  Copying: 512/512 [B] (average 250 kBps) 00:25:55.086 00:25:55.086 21:22:17 -- dd/posix.sh@93 -- # [[ 0wz6klm1kzqmz9b1ehek05lnyqsglriytxrewh6ltzarz9debgmvabe292qvp1zzpgoj41lut0wcvibocg3u5m31ivzc0cz7xh5raj55xdzncxpqhasi1eit41492m49fici6v0l4ppcfi1go92or24zsq2temwvqexenl162wrbz1ujdl8fexxo8zpa16fnqhguei2ckyys0zuw0il8cwr7kiucddrzo9tnmxewvd725h12xecxaq0ka2i635ulmx1l1gt1z7rj4ak2uoufy0cyq7xe8ugqryl67627yzalwyywyxirhfilh7ky9ldk5cjy1q86paijfhodazz7w0ctph5nwjr46ayukk53qg9vrzb5536cyuxw9wiima4umz5bkzabgyc3jqxqbkxpyjxg1yl19p2xamhkzvdorzruzpg2phlk7decl7qzf21yhsa22g0ii5v36q9zuooihvz81lq9i9d02trgs03jhfrg15y6khbsl3cmu0sf66b3 == \0\w\z\6\k\l\m\1\k\z\q\m\z\9\b\1\e\h\e\k\0\5\l\n\y\q\s\g\l\r\i\y\t\x\r\e\w\h\6\l\t\z\a\r\z\9\d\e\b\g\m\v\a\b\e\2\9\2\q\v\p\1\z\z\p\g\o\j\4\1\l\u\t\0\w\c\v\i\b\o\c\g\3\u\5\m\3\1\i\v\z\c\0\c\z\7\x\h\5\r\a\j\5\5\x\d\z\n\c\x\p\q\h\a\s\i\1\e\i\t\4\1\4\9\2\m\4\9\f\i\c\i\6\v\0\l\4\p\p\c\f\i\1\g\o\9\2\o\r\2\4\z\s\q\2\t\e\m\w\v\q\e\x\e\n\l\1\6\2\w\r\b\z\1\u\j\d\l\8\f\e\x\x\o\8\z\p\a\1\6\f\n\q\h\g\u\e\i\2\c\k\y\y\s\0\z\u\w\0\i\l\8\c\w\r\7\k\i\u\c\d\d\r\z\o\9\t\n\m\x\e\w\v\d\7\2\5\h\1\2\x\e\c\x\a\q\0\k\a\2\i\6\3\5\u\l\m\x\1\l\1\g\t\1\z\7\r\j\4\a\k\2\u\o\u\f\y\0\c\y\q\7\x\e\8\u\g\q\r\y\l\6\7\6\2\7\y\z\a\l\w\y\y\w\y\x\i\r\h\f\i\l\h\7\k\y\9\l\d\k\5\c\j\y\1\q\8\6\p\a\i\j\f\h\o\d\a\z\z\7\w\0\c\t\p\h\5\n\w\j\r\4\6\a\y\u\k\k\5\3\q\g\9\v\r\z\b\5\5\3\6\c\y\u\x\w\9\w\i\i\m\a\4\u\m\z\5\b\k\z\a\b\g\y\c\3\j\q\x\q\b\k\x\p\y\j\x\g\1\y\l\1\9\p\2\x\a\m\h\k\z\v\d\o\r\z\r\u\z\p\g\2\p\h\l\k\7\d\e\c\l\7\q\z\f\2\1\y\h\s\a\2\2\g\0\i\i\5\v\3\6\q\9\z\u\o\o\i\h\v\z\8\1\l\q\9\i\9\d\0\2\t\r\g\s\0\3\j\h\f\r\g\1\5\y\6\k\h\b\s\l\3\c\m\u\0\s\f\6\6\b\3 ]] 00:25:55.086 21:22:17 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:55.086 21:22:17 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:25:55.086 [2024-06-07 21:22:17.712947] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:55.086 [2024-06-07 21:22:17.713225] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148683 ] 00:25:55.346 [2024-06-07 21:22:17.880714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.346 [2024-06-07 21:22:17.956546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.864  Copying: 512/512 [B] (average 125 kBps) 00:25:55.864 00:25:55.864 21:22:18 -- dd/posix.sh@93 -- # [[ 0wz6klm1kzqmz9b1ehek05lnyqsglriytxrewh6ltzarz9debgmvabe292qvp1zzpgoj41lut0wcvibocg3u5m31ivzc0cz7xh5raj55xdzncxpqhasi1eit41492m49fici6v0l4ppcfi1go92or24zsq2temwvqexenl162wrbz1ujdl8fexxo8zpa16fnqhguei2ckyys0zuw0il8cwr7kiucddrzo9tnmxewvd725h12xecxaq0ka2i635ulmx1l1gt1z7rj4ak2uoufy0cyq7xe8ugqryl67627yzalwyywyxirhfilh7ky9ldk5cjy1q86paijfhodazz7w0ctph5nwjr46ayukk53qg9vrzb5536cyuxw9wiima4umz5bkzabgyc3jqxqbkxpyjxg1yl19p2xamhkzvdorzruzpg2phlk7decl7qzf21yhsa22g0ii5v36q9zuooihvz81lq9i9d02trgs03jhfrg15y6khbsl3cmu0sf66b3 == \0\w\z\6\k\l\m\1\k\z\q\m\z\9\b\1\e\h\e\k\0\5\l\n\y\q\s\g\l\r\i\y\t\x\r\e\w\h\6\l\t\z\a\r\z\9\d\e\b\g\m\v\a\b\e\2\9\2\q\v\p\1\z\z\p\g\o\j\4\1\l\u\t\0\w\c\v\i\b\o\c\g\3\u\5\m\3\1\i\v\z\c\0\c\z\7\x\h\5\r\a\j\5\5\x\d\z\n\c\x\p\q\h\a\s\i\1\e\i\t\4\1\4\9\2\m\4\9\f\i\c\i\6\v\0\l\4\p\p\c\f\i\1\g\o\9\2\o\r\2\4\z\s\q\2\t\e\m\w\v\q\e\x\e\n\l\1\6\2\w\r\b\z\1\u\j\d\l\8\f\e\x\x\o\8\z\p\a\1\6\f\n\q\h\g\u\e\i\2\c\k\y\y\s\0\z\u\w\0\i\l\8\c\w\r\7\k\i\u\c\d\d\r\z\o\9\t\n\m\x\e\w\v\d\7\2\5\h\1\2\x\e\c\x\a\q\0\k\a\2\i\6\3\5\u\l\m\x\1\l\1\g\t\1\z\7\r\j\4\a\k\2\u\o\u\f\y\0\c\y\q\7\x\e\8\u\g\q\r\y\l\6\7\6\2\7\y\z\a\l\w\y\y\w\y\x\i\r\h\f\i\l\h\7\k\y\9\l\d\k\5\c\j\y\1\q\8\6\p\a\i\j\f\h\o\d\a\z\z\7\w\0\c\t\p\h\5\n\w\j\r\4\6\a\y\u\k\k\5\3\q\g\9\v\r\z\b\5\5\3\6\c\y\u\x\w\9\w\i\i\m\a\4\u\m\z\5\b\k\z\a\b\g\y\c\3\j\q\x\q\b\k\x\p\y\j\x\g\1\y\l\1\9\p\2\x\a\m\h\k\z\v\d\o\r\z\r\u\z\p\g\2\p\h\l\k\7\d\e\c\l\7\q\z\f\2\1\y\h\s\a\2\2\g\0\i\i\5\v\3\6\q\9\z\u\o\o\i\h\v\z\8\1\l\q\9\i\9\d\0\2\t\r\g\s\0\3\j\h\f\r\g\1\5\y\6\k\h\b\s\l\3\c\m\u\0\s\f\6\6\b\3 ]] 00:25:55.864 21:22:18 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:25:55.864 21:22:18 -- dd/posix.sh@86 -- # gen_bytes 512 00:25:55.864 21:22:18 -- dd/common.sh@98 -- # xtrace_disable 00:25:55.864 21:22:18 -- common/autotest_common.sh@10 -- # set +x 00:25:55.864 21:22:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:55.864 21:22:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:25:55.864 [2024-06-07 21:22:18.404363] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:55.864 [2024-06-07 21:22:18.404624] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148700 ] 00:25:56.123 [2024-06-07 21:22:18.570445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.123 [2024-06-07 21:22:18.629823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.381  Copying: 512/512 [B] (average 500 kBps) 00:25:56.381 00:25:56.381 21:22:19 -- dd/posix.sh@93 -- # [[ ykru6bb9p6hrbom7d5s1qq41kuws4pauvjyau3n8nw8ko5l42rn6gkvezhtfug0dqkix9q2nt4vde6xw24x3su1pv0f6eqr5ndzrglo4wyq871m39qaqs95k6t6tevf8n5x6rdwfme1a4y4l14zmhp7dlz9hxga04hu8xwzc1br0xcx4208apgdug0o6bd8ev3ixcv5n1r7nhs1yxftsecy911yqsvm0o2b73ox3vl0x2f59ok59xv3g1ocjizm68w7qadg4f5farz92e8ji1nzwucfzgv7a63f64ow5wnj83hyyfzgdk6rh5wvr0punb4gwa1e1lbamov2jtqc3p3t4ebg26d3o9ibspyjazrj6b8ftot8rzjmtoxrh423wp7d18j0qf0vzotxtv7x36s1gzuwv5cm94jvg83wr5buqpvtt6eqqdyrvx42ktvblk288w0u5kgg9742df5qp0xnvpgac1721yhmg6g6p9re9ou7ryagts5vnhdfetwdv == \y\k\r\u\6\b\b\9\p\6\h\r\b\o\m\7\d\5\s\1\q\q\4\1\k\u\w\s\4\p\a\u\v\j\y\a\u\3\n\8\n\w\8\k\o\5\l\4\2\r\n\6\g\k\v\e\z\h\t\f\u\g\0\d\q\k\i\x\9\q\2\n\t\4\v\d\e\6\x\w\2\4\x\3\s\u\1\p\v\0\f\6\e\q\r\5\n\d\z\r\g\l\o\4\w\y\q\8\7\1\m\3\9\q\a\q\s\9\5\k\6\t\6\t\e\v\f\8\n\5\x\6\r\d\w\f\m\e\1\a\4\y\4\l\1\4\z\m\h\p\7\d\l\z\9\h\x\g\a\0\4\h\u\8\x\w\z\c\1\b\r\0\x\c\x\4\2\0\8\a\p\g\d\u\g\0\o\6\b\d\8\e\v\3\i\x\c\v\5\n\1\r\7\n\h\s\1\y\x\f\t\s\e\c\y\9\1\1\y\q\s\v\m\0\o\2\b\7\3\o\x\3\v\l\0\x\2\f\5\9\o\k\5\9\x\v\3\g\1\o\c\j\i\z\m\6\8\w\7\q\a\d\g\4\f\5\f\a\r\z\9\2\e\8\j\i\1\n\z\w\u\c\f\z\g\v\7\a\6\3\f\6\4\o\w\5\w\n\j\8\3\h\y\y\f\z\g\d\k\6\r\h\5\w\v\r\0\p\u\n\b\4\g\w\a\1\e\1\l\b\a\m\o\v\2\j\t\q\c\3\p\3\t\4\e\b\g\2\6\d\3\o\9\i\b\s\p\y\j\a\z\r\j\6\b\8\f\t\o\t\8\r\z\j\m\t\o\x\r\h\4\2\3\w\p\7\d\1\8\j\0\q\f\0\v\z\o\t\x\t\v\7\x\3\6\s\1\g\z\u\w\v\5\c\m\9\4\j\v\g\8\3\w\r\5\b\u\q\p\v\t\t\6\e\q\q\d\y\r\v\x\4\2\k\t\v\b\l\k\2\8\8\w\0\u\5\k\g\g\9\7\4\2\d\f\5\q\p\0\x\n\v\p\g\a\c\1\7\2\1\y\h\m\g\6\g\6\p\9\r\e\9\o\u\7\r\y\a\g\t\s\5\v\n\h\d\f\e\t\w\d\v ]] 00:25:56.381 21:22:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:56.381 21:22:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:25:56.381 [2024-06-07 21:22:19.054624] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:56.381 [2024-06-07 21:22:19.054897] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148705 ] 00:25:56.639 [2024-06-07 21:22:19.223389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.639 [2024-06-07 21:22:19.303846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.155  Copying: 512/512 [B] (average 500 kBps) 00:25:57.155 00:25:57.155 21:22:19 -- dd/posix.sh@93 -- # [[ ykru6bb9p6hrbom7d5s1qq41kuws4pauvjyau3n8nw8ko5l42rn6gkvezhtfug0dqkix9q2nt4vde6xw24x3su1pv0f6eqr5ndzrglo4wyq871m39qaqs95k6t6tevf8n5x6rdwfme1a4y4l14zmhp7dlz9hxga04hu8xwzc1br0xcx4208apgdug0o6bd8ev3ixcv5n1r7nhs1yxftsecy911yqsvm0o2b73ox3vl0x2f59ok59xv3g1ocjizm68w7qadg4f5farz92e8ji1nzwucfzgv7a63f64ow5wnj83hyyfzgdk6rh5wvr0punb4gwa1e1lbamov2jtqc3p3t4ebg26d3o9ibspyjazrj6b8ftot8rzjmtoxrh423wp7d18j0qf0vzotxtv7x36s1gzuwv5cm94jvg83wr5buqpvtt6eqqdyrvx42ktvblk288w0u5kgg9742df5qp0xnvpgac1721yhmg6g6p9re9ou7ryagts5vnhdfetwdv == \y\k\r\u\6\b\b\9\p\6\h\r\b\o\m\7\d\5\s\1\q\q\4\1\k\u\w\s\4\p\a\u\v\j\y\a\u\3\n\8\n\w\8\k\o\5\l\4\2\r\n\6\g\k\v\e\z\h\t\f\u\g\0\d\q\k\i\x\9\q\2\n\t\4\v\d\e\6\x\w\2\4\x\3\s\u\1\p\v\0\f\6\e\q\r\5\n\d\z\r\g\l\o\4\w\y\q\8\7\1\m\3\9\q\a\q\s\9\5\k\6\t\6\t\e\v\f\8\n\5\x\6\r\d\w\f\m\e\1\a\4\y\4\l\1\4\z\m\h\p\7\d\l\z\9\h\x\g\a\0\4\h\u\8\x\w\z\c\1\b\r\0\x\c\x\4\2\0\8\a\p\g\d\u\g\0\o\6\b\d\8\e\v\3\i\x\c\v\5\n\1\r\7\n\h\s\1\y\x\f\t\s\e\c\y\9\1\1\y\q\s\v\m\0\o\2\b\7\3\o\x\3\v\l\0\x\2\f\5\9\o\k\5\9\x\v\3\g\1\o\c\j\i\z\m\6\8\w\7\q\a\d\g\4\f\5\f\a\r\z\9\2\e\8\j\i\1\n\z\w\u\c\f\z\g\v\7\a\6\3\f\6\4\o\w\5\w\n\j\8\3\h\y\y\f\z\g\d\k\6\r\h\5\w\v\r\0\p\u\n\b\4\g\w\a\1\e\1\l\b\a\m\o\v\2\j\t\q\c\3\p\3\t\4\e\b\g\2\6\d\3\o\9\i\b\s\p\y\j\a\z\r\j\6\b\8\f\t\o\t\8\r\z\j\m\t\o\x\r\h\4\2\3\w\p\7\d\1\8\j\0\q\f\0\v\z\o\t\x\t\v\7\x\3\6\s\1\g\z\u\w\v\5\c\m\9\4\j\v\g\8\3\w\r\5\b\u\q\p\v\t\t\6\e\q\q\d\y\r\v\x\4\2\k\t\v\b\l\k\2\8\8\w\0\u\5\k\g\g\9\7\4\2\d\f\5\q\p\0\x\n\v\p\g\a\c\1\7\2\1\y\h\m\g\6\g\6\p\9\r\e\9\o\u\7\r\y\a\g\t\s\5\v\n\h\d\f\e\t\w\d\v ]] 00:25:57.155 21:22:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:57.155 21:22:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:25:57.155 [2024-06-07 21:22:19.736219] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:57.155 [2024-06-07 21:22:19.736499] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148722 ] 00:25:57.413 [2024-06-07 21:22:19.901792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.413 [2024-06-07 21:22:19.966637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.671  Copying: 512/512 [B] (average 166 kBps) 00:25:57.671 00:25:57.931 21:22:20 -- dd/posix.sh@93 -- # [[ ykru6bb9p6hrbom7d5s1qq41kuws4pauvjyau3n8nw8ko5l42rn6gkvezhtfug0dqkix9q2nt4vde6xw24x3su1pv0f6eqr5ndzrglo4wyq871m39qaqs95k6t6tevf8n5x6rdwfme1a4y4l14zmhp7dlz9hxga04hu8xwzc1br0xcx4208apgdug0o6bd8ev3ixcv5n1r7nhs1yxftsecy911yqsvm0o2b73ox3vl0x2f59ok59xv3g1ocjizm68w7qadg4f5farz92e8ji1nzwucfzgv7a63f64ow5wnj83hyyfzgdk6rh5wvr0punb4gwa1e1lbamov2jtqc3p3t4ebg26d3o9ibspyjazrj6b8ftot8rzjmtoxrh423wp7d18j0qf0vzotxtv7x36s1gzuwv5cm94jvg83wr5buqpvtt6eqqdyrvx42ktvblk288w0u5kgg9742df5qp0xnvpgac1721yhmg6g6p9re9ou7ryagts5vnhdfetwdv == \y\k\r\u\6\b\b\9\p\6\h\r\b\o\m\7\d\5\s\1\q\q\4\1\k\u\w\s\4\p\a\u\v\j\y\a\u\3\n\8\n\w\8\k\o\5\l\4\2\r\n\6\g\k\v\e\z\h\t\f\u\g\0\d\q\k\i\x\9\q\2\n\t\4\v\d\e\6\x\w\2\4\x\3\s\u\1\p\v\0\f\6\e\q\r\5\n\d\z\r\g\l\o\4\w\y\q\8\7\1\m\3\9\q\a\q\s\9\5\k\6\t\6\t\e\v\f\8\n\5\x\6\r\d\w\f\m\e\1\a\4\y\4\l\1\4\z\m\h\p\7\d\l\z\9\h\x\g\a\0\4\h\u\8\x\w\z\c\1\b\r\0\x\c\x\4\2\0\8\a\p\g\d\u\g\0\o\6\b\d\8\e\v\3\i\x\c\v\5\n\1\r\7\n\h\s\1\y\x\f\t\s\e\c\y\9\1\1\y\q\s\v\m\0\o\2\b\7\3\o\x\3\v\l\0\x\2\f\5\9\o\k\5\9\x\v\3\g\1\o\c\j\i\z\m\6\8\w\7\q\a\d\g\4\f\5\f\a\r\z\9\2\e\8\j\i\1\n\z\w\u\c\f\z\g\v\7\a\6\3\f\6\4\o\w\5\w\n\j\8\3\h\y\y\f\z\g\d\k\6\r\h\5\w\v\r\0\p\u\n\b\4\g\w\a\1\e\1\l\b\a\m\o\v\2\j\t\q\c\3\p\3\t\4\e\b\g\2\6\d\3\o\9\i\b\s\p\y\j\a\z\r\j\6\b\8\f\t\o\t\8\r\z\j\m\t\o\x\r\h\4\2\3\w\p\7\d\1\8\j\0\q\f\0\v\z\o\t\x\t\v\7\x\3\6\s\1\g\z\u\w\v\5\c\m\9\4\j\v\g\8\3\w\r\5\b\u\q\p\v\t\t\6\e\q\q\d\y\r\v\x\4\2\k\t\v\b\l\k\2\8\8\w\0\u\5\k\g\g\9\7\4\2\d\f\5\q\p\0\x\n\v\p\g\a\c\1\7\2\1\y\h\m\g\6\g\6\p\9\r\e\9\o\u\7\r\y\a\g\t\s\5\v\n\h\d\f\e\t\w\d\v ]] 00:25:57.931 21:22:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:57.931 21:22:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:25:57.931 [2024-06-07 21:22:20.400205] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
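[annotation] A note on the rates in the "Copying:" lines: the payload is only 512 bytes, so the kBps figure is effectively a per-copy latency measurement, not bandwidth. Taking kBps as roughly 1000 bytes per second:

    512 B / 500 kBps ≈ 1.0 ms   # direct/nonblock copies in this run
    512 B / 166 kBps ≈ 3.1 ms   # sync/dsync copies pay for the flush
    512 B / 125 kBps ≈ 4.1 ms   # slowest dsync iteration above

The sync/dsync iterations sitting at 125-250 kBps while the others report 500 kBps is consistent with O_SYNC/O_DSYNC forcing a flush per write.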
00:25:57.931 [2024-06-07 21:22:20.400498] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148738 ] 00:25:57.931 [2024-06-07 21:22:20.566645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.189 [2024-06-07 21:22:20.638145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.448  Copying: 512/512 [B] (average 250 kBps) 00:25:58.448 00:25:58.448 21:22:21 -- dd/posix.sh@93 -- # [[ ykru6bb9p6hrbom7d5s1qq41kuws4pauvjyau3n8nw8ko5l42rn6gkvezhtfug0dqkix9q2nt4vde6xw24x3su1pv0f6eqr5ndzrglo4wyq871m39qaqs95k6t6tevf8n5x6rdwfme1a4y4l14zmhp7dlz9hxga04hu8xwzc1br0xcx4208apgdug0o6bd8ev3ixcv5n1r7nhs1yxftsecy911yqsvm0o2b73ox3vl0x2f59ok59xv3g1ocjizm68w7qadg4f5farz92e8ji1nzwucfzgv7a63f64ow5wnj83hyyfzgdk6rh5wvr0punb4gwa1e1lbamov2jtqc3p3t4ebg26d3o9ibspyjazrj6b8ftot8rzjmtoxrh423wp7d18j0qf0vzotxtv7x36s1gzuwv5cm94jvg83wr5buqpvtt6eqqdyrvx42ktvblk288w0u5kgg9742df5qp0xnvpgac1721yhmg6g6p9re9ou7ryagts5vnhdfetwdv == \y\k\r\u\6\b\b\9\p\6\h\r\b\o\m\7\d\5\s\1\q\q\4\1\k\u\w\s\4\p\a\u\v\j\y\a\u\3\n\8\n\w\8\k\o\5\l\4\2\r\n\6\g\k\v\e\z\h\t\f\u\g\0\d\q\k\i\x\9\q\2\n\t\4\v\d\e\6\x\w\2\4\x\3\s\u\1\p\v\0\f\6\e\q\r\5\n\d\z\r\g\l\o\4\w\y\q\8\7\1\m\3\9\q\a\q\s\9\5\k\6\t\6\t\e\v\f\8\n\5\x\6\r\d\w\f\m\e\1\a\4\y\4\l\1\4\z\m\h\p\7\d\l\z\9\h\x\g\a\0\4\h\u\8\x\w\z\c\1\b\r\0\x\c\x\4\2\0\8\a\p\g\d\u\g\0\o\6\b\d\8\e\v\3\i\x\c\v\5\n\1\r\7\n\h\s\1\y\x\f\t\s\e\c\y\9\1\1\y\q\s\v\m\0\o\2\b\7\3\o\x\3\v\l\0\x\2\f\5\9\o\k\5\9\x\v\3\g\1\o\c\j\i\z\m\6\8\w\7\q\a\d\g\4\f\5\f\a\r\z\9\2\e\8\j\i\1\n\z\w\u\c\f\z\g\v\7\a\6\3\f\6\4\o\w\5\w\n\j\8\3\h\y\y\f\z\g\d\k\6\r\h\5\w\v\r\0\p\u\n\b\4\g\w\a\1\e\1\l\b\a\m\o\v\2\j\t\q\c\3\p\3\t\4\e\b\g\2\6\d\3\o\9\i\b\s\p\y\j\a\z\r\j\6\b\8\f\t\o\t\8\r\z\j\m\t\o\x\r\h\4\2\3\w\p\7\d\1\8\j\0\q\f\0\v\z\o\t\x\t\v\7\x\3\6\s\1\g\z\u\w\v\5\c\m\9\4\j\v\g\8\3\w\r\5\b\u\q\p\v\t\t\6\e\q\q\d\y\r\v\x\4\2\k\t\v\b\l\k\2\8\8\w\0\u\5\k\g\g\9\7\4\2\d\f\5\q\p\0\x\n\v\p\g\a\c\1\7\2\1\y\h\m\g\6\g\6\p\9\r\e\9\o\u\7\r\y\a\g\t\s\5\v\n\h\d\f\e\t\w\d\v ]] 00:25:58.448 ************************************ 00:25:58.448 END TEST dd_flags_misc_forced_aio 00:25:58.448 ************************************ 00:25:58.448 00:25:58.448 real 0m5.361s 00:25:58.448 user 0m2.616s 00:25:58.448 sys 0m1.644s 00:25:58.448 21:22:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:58.448 21:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:58.448 21:22:21 -- dd/posix.sh@1 -- # cleanup 00:25:58.448 21:22:21 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:58.448 21:22:21 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:58.448 00:25:58.448 real 0m26.088s 00:25:58.448 user 0m12.527s 00:25:58.448 sys 0m7.424s 00:25:58.448 21:22:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:58.448 21:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:58.448 ************************************ 00:25:58.448 END TEST spdk_dd_posix 00:25:58.448 ************************************ 00:25:58.448 21:22:21 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:25:58.448 21:22:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:58.448 21:22:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:58.448 21:22:21 -- 
common/autotest_common.sh@10 -- # set +x 00:25:58.448 ************************************ 00:25:58.448 START TEST spdk_dd_malloc 00:25:58.448 ************************************ 00:25:58.448 21:22:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:25:58.706 * Looking for test storage... 00:25:58.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:58.706 21:22:21 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:58.706 21:22:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:58.706 21:22:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:58.706 21:22:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:58.706 21:22:21 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:58.706 21:22:21 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:58.706 21:22:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:58.706 21:22:21 -- paths/export.sh@5 -- # export PATH 00:25:58.706 21:22:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:58.706 21:22:21 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:25:58.706 21:22:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:58.706 21:22:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:58.706 21:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:58.706 ************************************ 00:25:58.706 START TEST dd_malloc_copy 00:25:58.706 ************************************ 00:25:58.706 21:22:21 -- 
common/autotest_common.sh@1104 -- # malloc_copy 00:25:58.706 21:22:21 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:25:58.706 21:22:21 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:25:58.706 21:22:21 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(["name"]=$mbdev0 ["num_blocks"]=$mbdev0_b ["block_size"]=$mbdev0_bs) 00:25:58.706 21:22:21 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:25:58.706 21:22:21 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(["name"]=$mbdev1 ["num_blocks"]=$mbdev1_b ["block_size"]=$mbdev1_bs) 00:25:58.706 21:22:21 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:25:58.706 21:22:21 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:25:58.706 21:22:21 -- dd/malloc.sh@28 -- # gen_conf 00:25:58.706 21:22:21 -- dd/common.sh@31 -- # xtrace_disable 00:25:58.706 21:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:58.706 [2024-06-07 21:22:21.234255] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:58.706 [2024-06-07 21:22:21.234499] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148816 ] 00:25:58.706 { 00:25:58.707 "subsystems": [ 00:25:58.707 { 00:25:58.707 "subsystem": "bdev", 00:25:58.707 "config": [ 00:25:58.707 { 00:25:58.707 "params": { 00:25:58.707 "num_blocks": 1048576, 00:25:58.707 "block_size": 512, 00:25:58.707 "name": "malloc0" 00:25:58.707 }, 00:25:58.707 "method": "bdev_malloc_create" 00:25:58.707 }, 00:25:58.707 { 00:25:58.707 "params": { 00:25:58.707 "num_blocks": 1048576, 00:25:58.707 "block_size": 512, 00:25:58.707 "name": "malloc1" 00:25:58.707 }, 00:25:58.707 "method": "bdev_malloc_create" 00:25:58.707 }, 00:25:58.707 { 00:25:58.707 "method": "bdev_wait_for_examine" 00:25:58.707 } 00:25:58.707 ] 00:25:58.707 } 00:25:58.707 ] 00:25:58.707 } 00:25:58.965 [2024-06-07 21:22:21.396580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.965 [2024-06-07 21:22:21.451551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.422  Copying: 198/512 [MB] (198 MBps) Copying: 394/512 [MB] (195 MBps) Copying: 512/512 [MB] (average 197 MBps) 00:26:02.422 00:26:02.422 21:22:25 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:26:02.422 21:22:25 -- dd/malloc.sh@33 -- # gen_conf 00:26:02.422 21:22:25 -- dd/common.sh@31 -- # xtrace_disable 00:26:02.422 21:22:25 -- common/autotest_common.sh@10 -- # set +x 00:26:02.680 [2024-06-07 21:22:25.127949] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:02.680 [2024-06-07 21:22:25.128227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148866 ] 00:26:02.680 { 00:26:02.680 "subsystems": [ 00:26:02.680 { 00:26:02.680 "subsystem": "bdev", 00:26:02.680 "config": [ 00:26:02.680 { 00:26:02.680 "params": { 00:26:02.680 "num_blocks": 1048576, 00:26:02.680 "block_size": 512, 00:26:02.680 "name": "malloc0" 00:26:02.680 }, 00:26:02.680 "method": "bdev_malloc_create" 00:26:02.680 }, 00:26:02.680 { 00:26:02.680 "params": { 00:26:02.680 "num_blocks": 1048576, 00:26:02.680 "block_size": 512, 00:26:02.680 "name": "malloc1" 00:26:02.680 }, 00:26:02.680 "method": "bdev_malloc_create" 00:26:02.680 }, 00:26:02.680 { 00:26:02.680 "method": "bdev_wait_for_examine" 00:26:02.680 } 00:26:02.680 ] 00:26:02.680 } 00:26:02.680 ] 00:26:02.680 } 00:26:02.680 [2024-06-07 21:22:25.295334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.939 [2024-06-07 21:22:25.378914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.387  Copying: 194/512 [MB] (194 MBps) Copying: 396/512 [MB] (202 MBps) Copying: 512/512 [MB] (average 196 MBps) 00:26:06.387 00:26:06.387 00:26:06.387 real 0m7.845s 00:26:06.387 user 0m6.744s 00:26:06.387 sys 0m0.986s 00:26:06.387 21:22:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:06.387 21:22:29 -- common/autotest_common.sh@10 -- # set +x 00:26:06.387 ************************************ 00:26:06.387 END TEST dd_malloc_copy 00:26:06.387 ************************************ 00:26:06.647 00:26:06.647 real 0m7.969s 00:26:06.647 user 0m6.824s 00:26:06.647 sys 0m1.030s 00:26:06.648 21:22:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:06.648 ************************************ 00:26:06.648 END TEST spdk_dd_malloc 00:26:06.648 ************************************ 00:26:06.648 21:22:29 -- common/autotest_common.sh@10 -- # set +x 00:26:06.648 21:22:29 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:26:06.648 21:22:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:06.648 21:22:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:06.648 21:22:29 -- common/autotest_common.sh@10 -- # set +x 00:26:06.648 ************************************ 00:26:06.648 START TEST spdk_dd_bdev_to_bdev 00:26:06.648 ************************************ 00:26:06.648 21:22:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:26:06.648 * Looking for test storage... 
00:26:06.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:06.648 21:22:29 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:06.648 21:22:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:06.648 21:22:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:06.648 21:22:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:06.648 21:22:29 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:06.648 21:22:29 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:06.648 21:22:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:06.648 21:22:29 -- paths/export.sh@5 -- # export PATH 00:26:06.648 21:22:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:06.648 21:22:29 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:26:06.648 21:22:29 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:26:06.648 21:22:29 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:26:06.648 21:22:29 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:26:06.648 21:22:29 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:26:06.648 21:22:29 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:26:06.648 21:22:29 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:26:06.648 21:22:29 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:26:06.648 21:22:29 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:26:06.648 21:22:29 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:26:06.648 21:22:29 -- 
dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:26:06.648 21:22:29 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(["name"]=$bdev1 ["filename"]=$aio1 ["block_size"]=4096) 00:26:06.648 21:22:29 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:26:06.648 21:22:29 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:26:06.648 [2024-06-07 21:22:29.227435] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:06.648 [2024-06-07 21:22:29.227695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148999 ] 00:26:06.907 [2024-06-07 21:22:29.382768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.907 [2024-06-07 21:22:29.461554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.424  Copying: 256/256 [MB] (average 1383 MBps) 00:26:07.424 00:26:07.424 21:22:30 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:07.424 21:22:30 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:07.424 21:22:30 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:26:07.424 21:22:30 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:26:07.424 21:22:30 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:26:07.424 21:22:30 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:26:07.424 21:22:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:07.424 21:22:30 -- common/autotest_common.sh@10 -- # set +x 00:26:07.424 ************************************ 00:26:07.424 START TEST dd_inflate_file 00:26:07.424 ************************************ 00:26:07.424 21:22:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:26:07.424 [2024-06-07 21:22:30.075255] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:07.424 [2024-06-07 21:22:30.075461] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149016 ] 00:26:07.683 [2024-06-07 21:22:30.231759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.683 [2024-06-07 21:22:30.327379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.202  Copying: 64/64 [MB] (average 864 MBps) 00:26:08.202 00:26:08.202 00:26:08.202 real 0m0.733s 00:26:08.202 user 0m0.354s 00:26:08.202 sys 0m0.250s 00:26:08.202 ************************************ 00:26:08.202 21:22:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:08.202 21:22:30 -- common/autotest_common.sh@10 -- # set +x 00:26:08.202 END TEST dd_inflate_file 00:26:08.202 ************************************ 00:26:08.202 21:22:30 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:26:08.202 21:22:30 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:26:08.202 21:22:30 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:26:08.202 21:22:30 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:26:08.202 21:22:30 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:26:08.202 21:22:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:08.202 21:22:30 -- common/autotest_common.sh@10 -- # set +x 00:26:08.202 21:22:30 -- dd/common.sh@31 -- # xtrace_disable 00:26:08.202 21:22:30 -- common/autotest_common.sh@10 -- # set +x 00:26:08.202 ************************************ 00:26:08.202 START TEST dd_copy_to_out_bdev 00:26:08.202 ************************************ 00:26:08.202 21:22:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:26:08.202 [2024-06-07 21:22:30.870215] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:08.202 [2024-06-07 21:22:30.870689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149063 ] 00:26:08.202 { 00:26:08.202 "subsystems": [ 00:26:08.202 { 00:26:08.202 "subsystem": "bdev", 00:26:08.202 "config": [ 00:26:08.202 { 00:26:08.202 "params": { 00:26:08.202 "block_size": 4096, 00:26:08.202 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:08.202 "name": "aio1" 00:26:08.202 }, 00:26:08.202 "method": "bdev_aio_create" 00:26:08.202 }, 00:26:08.202 { 00:26:08.202 "params": { 00:26:08.202 "trtype": "pcie", 00:26:08.202 "traddr": "0000:00:06.0", 00:26:08.202 "name": "Nvme0" 00:26:08.202 }, 00:26:08.202 "method": "bdev_nvme_attach_controller" 00:26:08.202 }, 00:26:08.202 { 00:26:08.202 "method": "bdev_wait_for_examine" 00:26:08.202 } 00:26:08.202 ] 00:26:08.202 } 00:26:08.202 ] 00:26:08.202 } 00:26:08.461 [2024-06-07 21:22:31.038637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.461 [2024-06-07 21:22:31.102005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.978  Copying: 41/64 [MB] (41 MBps) Copying: 64/64 [MB] (average 40 MBps) 00:26:10.978 00:26:10.978 00:26:10.978 real 0m2.553s 00:26:10.978 user 0m2.145s 00:26:10.978 sys 0m0.305s 00:26:10.978 21:22:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:10.978 21:22:33 -- common/autotest_common.sh@10 -- # set +x 00:26:10.978 ************************************ 00:26:10.978 END TEST dd_copy_to_out_bdev 00:26:10.978 ************************************ 00:26:10.978 21:22:33 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:26:10.978 21:22:33 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:26:10.978 21:22:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:10.978 21:22:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:10.978 21:22:33 -- common/autotest_common.sh@10 -- # set +x 00:26:10.978 ************************************ 00:26:10.978 START TEST dd_offset_magic 00:26:10.978 ************************************ 00:26:10.978 21:22:33 -- common/autotest_common.sh@1104 -- # offset_magic 00:26:10.978 21:22:33 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:26:10.978 21:22:33 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:26:10.978 21:22:33 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:26:10.978 21:22:33 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:26:10.978 21:22:33 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:26:10.978 21:22:33 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:26:10.978 21:22:33 -- dd/common.sh@31 -- # xtrace_disable 00:26:10.978 21:22:33 -- common/autotest_common.sh@10 -- # set +x 00:26:10.978 [2024-06-07 21:22:33.471165] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:10.978 [2024-06-07 21:22:33.471348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149121 ] 00:26:10.978 { 00:26:10.978 "subsystems": [ 00:26:10.978 { 00:26:10.978 "subsystem": "bdev", 00:26:10.978 "config": [ 00:26:10.978 { 00:26:10.978 "params": { 00:26:10.978 "block_size": 4096, 00:26:10.978 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:10.978 "name": "aio1" 00:26:10.978 }, 00:26:10.978 "method": "bdev_aio_create" 00:26:10.978 }, 00:26:10.978 { 00:26:10.978 "params": { 00:26:10.978 "trtype": "pcie", 00:26:10.978 "traddr": "0000:00:06.0", 00:26:10.978 "name": "Nvme0" 00:26:10.978 }, 00:26:10.978 "method": "bdev_nvme_attach_controller" 00:26:10.978 }, 00:26:10.978 { 00:26:10.978 "method": "bdev_wait_for_examine" 00:26:10.978 } 00:26:10.978 ] 00:26:10.978 } 00:26:10.978 ] 00:26:10.978 } 00:26:10.978 [2024-06-07 21:22:33.626133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.236 [2024-06-07 21:22:33.719475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.062  Copying: 65/65 [MB] (average 345 MBps) 00:26:12.062 00:26:12.062 21:22:34 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:26:12.062 21:22:34 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:26:12.062 21:22:34 -- dd/common.sh@31 -- # xtrace_disable 00:26:12.062 21:22:34 -- common/autotest_common.sh@10 -- # set +x 00:26:12.062 { 00:26:12.062 "subsystems": [ 00:26:12.062 { 00:26:12.062 "subsystem": "bdev", 00:26:12.062 "config": [ 00:26:12.062 { 00:26:12.062 "params": { 00:26:12.062 "block_size": 4096, 00:26:12.062 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:12.062 "name": "aio1" 00:26:12.062 }, 00:26:12.062 "method": "bdev_aio_create" 00:26:12.062 }, 00:26:12.062 { 00:26:12.062 "params": { 00:26:12.062 "trtype": "pcie", 00:26:12.062 "traddr": "0000:00:06.0", 00:26:12.062 "name": "Nvme0" 00:26:12.062 }, 00:26:12.062 "method": "bdev_nvme_attach_controller" 00:26:12.062 }, 00:26:12.062 { 00:26:12.062 "method": "bdev_wait_for_examine" 00:26:12.062 } 00:26:12.062 ] 00:26:12.062 } 00:26:12.062 ] 00:26:12.062 } 00:26:12.062 [2024-06-07 21:22:34.632250] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:12.063 [2024-06-07 21:22:34.632531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149147 ] 00:26:12.322 [2024-06-07 21:22:34.800634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.322 [2024-06-07 21:22:34.893022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.148  Copying: 1024/1024 [kB] (average 500 MBps) 00:26:13.148 00:26:13.148 21:22:35 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:26:13.148 21:22:35 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:26:13.148 21:22:35 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:26:13.148 21:22:35 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:26:13.148 21:22:35 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:26:13.148 21:22:35 -- dd/common.sh@31 -- # xtrace_disable 00:26:13.148 21:22:35 -- common/autotest_common.sh@10 -- # set +x 00:26:13.148 [2024-06-07 21:22:35.672483] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:13.148 [2024-06-07 21:22:35.672722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149169 ] 00:26:13.148 { 00:26:13.148 "subsystems": [ 00:26:13.148 { 00:26:13.148 "subsystem": "bdev", 00:26:13.148 "config": [ 00:26:13.148 { 00:26:13.148 "params": { 00:26:13.148 "block_size": 4096, 00:26:13.148 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:13.148 "name": "aio1" 00:26:13.148 }, 00:26:13.148 "method": "bdev_aio_create" 00:26:13.148 }, 00:26:13.148 { 00:26:13.148 "params": { 00:26:13.148 "trtype": "pcie", 00:26:13.148 "traddr": "0000:00:06.0", 00:26:13.148 "name": "Nvme0" 00:26:13.148 }, 00:26:13.148 "method": "bdev_nvme_attach_controller" 00:26:13.148 }, 00:26:13.148 { 00:26:13.148 "method": "bdev_wait_for_examine" 00:26:13.148 } 00:26:13.148 ] 00:26:13.148 } 00:26:13.148 ] 00:26:13.148 } 00:26:13.407 [2024-06-07 21:22:35.839680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.407 [2024-06-07 21:22:35.947117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.233  Copying: 65/65 [MB] (average 317 MBps) 00:26:14.233 00:26:14.233 21:22:36 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:26:14.233 21:22:36 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:26:14.233 21:22:36 -- dd/common.sh@31 -- # xtrace_disable 00:26:14.233 21:22:36 -- common/autotest_common.sh@10 -- # set +x 00:26:14.491 [2024-06-07 21:22:36.942073] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:14.491 [2024-06-07 21:22:36.942350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149211 ] 00:26:14.491 { 00:26:14.491 "subsystems": [ 00:26:14.491 { 00:26:14.491 "subsystem": "bdev", 00:26:14.491 "config": [ 00:26:14.491 { 00:26:14.491 "params": { 00:26:14.491 "block_size": 4096, 00:26:14.491 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:14.491 "name": "aio1" 00:26:14.491 }, 00:26:14.491 "method": "bdev_aio_create" 00:26:14.491 }, 00:26:14.491 { 00:26:14.491 "params": { 00:26:14.491 "trtype": "pcie", 00:26:14.491 "traddr": "0000:00:06.0", 00:26:14.491 "name": "Nvme0" 00:26:14.491 }, 00:26:14.491 "method": "bdev_nvme_attach_controller" 00:26:14.491 }, 00:26:14.491 { 00:26:14.491 "method": "bdev_wait_for_examine" 00:26:14.491 } 00:26:14.491 ] 00:26:14.491 } 00:26:14.491 ] 00:26:14.491 } 00:26:14.491 [2024-06-07 21:22:37.110533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.750 [2024-06-07 21:22:37.219271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.576  Copying: 1024/1024 [kB] (average 500 MBps) 00:26:15.576 00:26:15.577 21:22:37 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:26:15.577 21:22:37 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:26:15.577 00:26:15.577 real 0m4.538s 00:26:15.577 user 0m2.547s 00:26:15.577 sys 0m1.227s 00:26:15.577 21:22:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:15.577 21:22:37 -- common/autotest_common.sh@10 -- # set +x 00:26:15.577 ************************************ 00:26:15.577 END TEST dd_offset_magic 00:26:15.577 ************************************ 00:26:15.577 21:22:38 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:26:15.577 21:22:38 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:26:15.577 21:22:38 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:15.577 21:22:38 -- dd/common.sh@11 -- # local nvme_ref= 00:26:15.577 21:22:38 -- dd/common.sh@12 -- # local size=4194330 00:26:15.577 21:22:38 -- dd/common.sh@14 -- # local bs=1048576 00:26:15.577 21:22:38 -- dd/common.sh@15 -- # local count=5 00:26:15.577 21:22:38 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:26:15.577 21:22:38 -- dd/common.sh@18 -- # gen_conf 00:26:15.577 21:22:38 -- dd/common.sh@31 -- # xtrace_disable 00:26:15.577 21:22:38 -- common/autotest_common.sh@10 -- # set +x 00:26:15.577 [2024-06-07 21:22:38.055996] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:15.577 [2024-06-07 21:22:38.056269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149248 ] 00:26:15.577 { 00:26:15.577 "subsystems": [ 00:26:15.577 { 00:26:15.577 "subsystem": "bdev", 00:26:15.577 "config": [ 00:26:15.577 { 00:26:15.577 "params": { 00:26:15.577 "block_size": 4096, 00:26:15.577 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:15.577 "name": "aio1" 00:26:15.577 }, 00:26:15.577 "method": "bdev_aio_create" 00:26:15.577 }, 00:26:15.577 { 00:26:15.577 "params": { 00:26:15.577 "trtype": "pcie", 00:26:15.577 "traddr": "0000:00:06.0", 00:26:15.577 "name": "Nvme0" 00:26:15.577 }, 00:26:15.577 "method": "bdev_nvme_attach_controller" 00:26:15.577 }, 00:26:15.577 { 00:26:15.577 "method": "bdev_wait_for_examine" 00:26:15.577 } 00:26:15.577 ] 00:26:15.577 } 00:26:15.577 ] 00:26:15.577 } 00:26:15.577 [2024-06-07 21:22:38.221616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.836 [2024-06-07 21:22:38.327033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.354  Copying: 5120/5120 [kB] (average 1000 MBps) 00:26:16.354 00:26:16.354 21:22:39 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:26:16.354 21:22:39 -- dd/common.sh@10 -- # local bdev=aio1 00:26:16.354 21:22:39 -- dd/common.sh@11 -- # local nvme_ref= 00:26:16.354 21:22:39 -- dd/common.sh@12 -- # local size=4194330 00:26:16.354 21:22:39 -- dd/common.sh@14 -- # local bs=1048576 00:26:16.354 21:22:39 -- dd/common.sh@15 -- # local count=5 00:26:16.354 21:22:39 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:26:16.354 21:22:39 -- dd/common.sh@18 -- # gen_conf 00:26:16.354 21:22:39 -- dd/common.sh@31 -- # xtrace_disable 00:26:16.354 21:22:39 -- common/autotest_common.sh@10 -- # set +x 00:26:16.613 [2024-06-07 21:22:39.069433] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:16.613 [2024-06-07 21:22:39.070370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149269 ] 00:26:16.613 { 00:26:16.613 "subsystems": [ 00:26:16.613 { 00:26:16.613 "subsystem": "bdev", 00:26:16.613 "config": [ 00:26:16.613 { 00:26:16.613 "params": { 00:26:16.613 "block_size": 4096, 00:26:16.613 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:16.613 "name": "aio1" 00:26:16.613 }, 00:26:16.613 "method": "bdev_aio_create" 00:26:16.613 }, 00:26:16.613 { 00:26:16.613 "params": { 00:26:16.613 "trtype": "pcie", 00:26:16.613 "traddr": "0000:00:06.0", 00:26:16.613 "name": "Nvme0" 00:26:16.613 }, 00:26:16.613 "method": "bdev_nvme_attach_controller" 00:26:16.613 }, 00:26:16.613 { 00:26:16.613 "method": "bdev_wait_for_examine" 00:26:16.613 } 00:26:16.613 ] 00:26:16.613 } 00:26:16.613 ] 00:26:16.613 } 00:26:16.613 [2024-06-07 21:22:39.240004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.871 [2024-06-07 21:22:39.333522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.389  Copying: 5120/5120 [kB] (average 312 MBps) 00:26:17.389 00:26:17.648 21:22:40 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:26:17.648 00:26:17.648 real 0m11.007s 00:26:17.648 user 0m6.749s 00:26:17.648 sys 0m2.877s 00:26:17.648 21:22:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:17.648 ************************************ 00:26:17.648 END TEST spdk_dd_bdev_to_bdev 00:26:17.648 21:22:40 -- common/autotest_common.sh@10 -- # set +x 00:26:17.648 ************************************ 00:26:17.648 21:22:40 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:26:17.648 21:22:40 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:26:17.648 21:22:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:17.648 21:22:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:17.648 21:22:40 -- common/autotest_common.sh@10 -- # set +x 00:26:17.648 ************************************ 00:26:17.648 START TEST spdk_dd_sparse 00:26:17.648 ************************************ 00:26:17.648 21:22:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:26:17.648 * Looking for test storage... 
00:26:17.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:17.648 21:22:40 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:17.648 21:22:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:17.648 21:22:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:17.648 21:22:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.648 21:22:40 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:17.648 21:22:40 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:17.648 21:22:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:17.648 21:22:40 -- paths/export.sh@5 -- # export PATH 00:26:17.648 21:22:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:17.648 21:22:40 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:26:17.648 21:22:40 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:26:17.648 21:22:40 -- dd/sparse.sh@110 -- # file1=file_zero1 00:26:17.648 21:22:40 -- dd/sparse.sh@111 -- # file2=file_zero2 00:26:17.648 21:22:40 -- dd/sparse.sh@112 -- # file3=file_zero3 00:26:17.648 21:22:40 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:26:17.648 21:22:40 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:26:17.648 21:22:40 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:26:17.648 21:22:40 -- dd/sparse.sh@118 -- # prepare 00:26:17.648 21:22:40 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:26:17.648 21:22:40 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:26:17.648 1+0 records in 00:26:17.648 1+0 records 
out 00:26:17.648 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00756592 s, 554 MB/s 00:26:17.648 21:22:40 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:26:17.648 1+0 records in 00:26:17.648 1+0 records out 00:26:17.648 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00834626 s, 503 MB/s 00:26:17.648 21:22:40 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:26:17.648 1+0 records in 00:26:17.648 1+0 records out 00:26:17.648 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00856095 s, 490 MB/s 00:26:17.648 21:22:40 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:26:17.648 21:22:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:17.648 21:22:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:17.648 21:22:40 -- common/autotest_common.sh@10 -- # set +x 00:26:17.648 ************************************ 00:26:17.648 START TEST dd_sparse_file_to_file 00:26:17.648 ************************************ 00:26:17.648 21:22:40 -- common/autotest_common.sh@1104 -- # file_to_file 00:26:17.648 21:22:40 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:26:17.648 21:22:40 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:26:17.648 21:22:40 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:26:17.648 21:22:40 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:26:17.648 21:22:40 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(["bdev_name"]=$aio_bdev ["lvs_name"]=$lvstore) 00:26:17.648 21:22:40 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:26:17.648 21:22:40 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:26:17.648 21:22:40 -- dd/sparse.sh@41 -- # gen_conf 00:26:17.648 21:22:40 -- dd/common.sh@31 -- # xtrace_disable 00:26:17.648 21:22:40 -- common/autotest_common.sh@10 -- # set +x 00:26:17.907 [2024-06-07 21:22:40.375344] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:17.907 [2024-06-07 21:22:40.376666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149348 ] 00:26:17.907 { 00:26:17.907 "subsystems": [ 00:26:17.907 { 00:26:17.907 "subsystem": "bdev", 00:26:17.907 "config": [ 00:26:17.907 { 00:26:17.907 "params": { 00:26:17.907 "block_size": 4096, 00:26:17.907 "filename": "dd_sparse_aio_disk", 00:26:17.907 "name": "dd_aio" 00:26:17.907 }, 00:26:17.907 "method": "bdev_aio_create" 00:26:17.907 }, 00:26:17.907 { 00:26:17.907 "params": { 00:26:17.907 "lvs_name": "dd_lvstore", 00:26:17.907 "bdev_name": "dd_aio" 00:26:17.907 }, 00:26:17.907 "method": "bdev_lvol_create_lvstore" 00:26:17.907 }, 00:26:17.907 { 00:26:17.907 "method": "bdev_wait_for_examine" 00:26:17.907 } 00:26:17.907 ] 00:26:17.907 } 00:26:17.907 ] 00:26:17.907 } 00:26:17.907 [2024-06-07 21:22:40.548313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.165 [2024-06-07 21:22:40.642354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.682  Copying: 12/36 [MB] (average 750 MBps) 00:26:18.682 00:26:18.941 21:22:41 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:26:18.941 21:22:41 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:26:18.941 21:22:41 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:26:18.941 21:22:41 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:26:18.941 21:22:41 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:26:18.941 21:22:41 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:26:18.941 21:22:41 -- dd/sparse.sh@52 -- # stat1_b=24576 00:26:18.941 21:22:41 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:26:18.941 ************************************ 00:26:18.941 END TEST dd_sparse_file_to_file 00:26:18.941 ************************************ 00:26:18.941 21:22:41 -- dd/sparse.sh@53 -- # stat2_b=24576 00:26:18.941 21:22:41 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:26:18.941 00:26:18.941 real 0m1.075s 00:26:18.941 user 0m0.606s 00:26:18.941 sys 0m0.347s 00:26:18.941 21:22:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:18.941 21:22:41 -- common/autotest_common.sh@10 -- # set +x 00:26:18.941 21:22:41 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:26:18.941 21:22:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:18.941 21:22:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:18.941 21:22:41 -- common/autotest_common.sh@10 -- # set +x 00:26:18.941 ************************************ 00:26:18.941 START TEST dd_sparse_file_to_bdev 00:26:18.941 ************************************ 00:26:18.941 21:22:41 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:26:18.941 21:22:41 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:26:18.941 21:22:41 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:26:18.941 21:22:41 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(["lvs_name"]=$lvstore ["lvol_name"]=$lvol ["size"]=37748736 ["thin_provision"]=true) 00:26:18.941 21:22:41 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:26:18.941 21:22:41 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:26:18.941 21:22:41 -- dd/sparse.sh@73 -- # gen_conf 00:26:18.941 21:22:41 -- 
dd/common.sh@31 -- # xtrace_disable 00:26:18.941 21:22:41 -- common/autotest_common.sh@10 -- # set +x 00:26:18.941 [2024-06-07 21:22:41.488408] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:18.941 [2024-06-07 21:22:41.488656] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149395 ] 00:26:18.941 { 00:26:18.941 "subsystems": [ 00:26:18.941 { 00:26:18.941 "subsystem": "bdev", 00:26:18.941 "config": [ 00:26:18.941 { 00:26:18.941 "params": { 00:26:18.941 "block_size": 4096, 00:26:18.941 "filename": "dd_sparse_aio_disk", 00:26:18.941 "name": "dd_aio" 00:26:18.941 }, 00:26:18.941 "method": "bdev_aio_create" 00:26:18.941 }, 00:26:18.941 { 00:26:18.941 "params": { 00:26:18.941 "lvs_name": "dd_lvstore", 00:26:18.941 "thin_provision": true, 00:26:18.941 "lvol_name": "dd_lvol", 00:26:18.941 "size": 37748736 00:26:18.941 }, 00:26:18.941 "method": "bdev_lvol_create" 00:26:18.941 }, 00:26:18.941 { 00:26:18.941 "method": "bdev_wait_for_examine" 00:26:18.941 } 00:26:18.941 ] 00:26:18.941 } 00:26:18.941 ] 00:26:18.941 } 00:26:19.200 [2024-06-07 21:22:41.653236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.200 [2024-06-07 21:22:41.749244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.459 [2024-06-07 21:22:41.894532] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:26:19.459  Copying: 12/36 [MB] (average 375 MBps)[2024-06-07 21:22:41.950194] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:26:20.027 00:26:20.027 00:26:20.027 ************************************ 00:26:20.027 END TEST dd_sparse_file_to_bdev 00:26:20.027 ************************************ 00:26:20.027 00:26:20.027 real 0m0.976s 00:26:20.027 user 0m0.578s 00:26:20.027 sys 0m0.308s 00:26:20.027 21:22:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:20.027 21:22:42 -- common/autotest_common.sh@10 -- # set +x 00:26:20.027 21:22:42 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:26:20.027 21:22:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:20.027 21:22:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:20.027 21:22:42 -- common/autotest_common.sh@10 -- # set +x 00:26:20.027 ************************************ 00:26:20.027 START TEST dd_sparse_bdev_to_file 00:26:20.027 ************************************ 00:26:20.027 21:22:42 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:26:20.027 21:22:42 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:26:20.027 21:22:42 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:26:20.027 21:22:42 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:26:20.027 21:22:42 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:26:20.027 21:22:42 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:26:20.027 21:22:42 -- dd/sparse.sh@91 -- # gen_conf 00:26:20.027 21:22:42 -- dd/common.sh@31 -- # xtrace_disable 00:26:20.027 21:22:42 -- common/autotest_common.sh@10 -- # set +x 
00:26:20.027 [2024-06-07 21:22:42.521843] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:20.027 [2024-06-07 21:22:42.522091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149442 ] 00:26:20.027 { 00:26:20.027 "subsystems": [ 00:26:20.027 { 00:26:20.027 "subsystem": "bdev", 00:26:20.027 "config": [ 00:26:20.027 { 00:26:20.027 "params": { 00:26:20.027 "block_size": 4096, 00:26:20.027 "filename": "dd_sparse_aio_disk", 00:26:20.027 "name": "dd_aio" 00:26:20.027 }, 00:26:20.027 "method": "bdev_aio_create" 00:26:20.027 }, 00:26:20.027 { 00:26:20.027 "method": "bdev_wait_for_examine" 00:26:20.027 } 00:26:20.027 ] 00:26:20.027 } 00:26:20.027 ] 00:26:20.027 } 00:26:20.027 [2024-06-07 21:22:42.691008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.285 [2024-06-07 21:22:42.784657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.811  Copying: 12/36 [MB] (average 750 MBps) 00:26:20.811 00:26:20.811 21:22:43 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:26:20.811 21:22:43 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:26:20.811 21:22:43 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:26:20.811 21:22:43 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:26:20.811 21:22:43 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:26:20.811 21:22:43 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:26:20.811 21:22:43 -- dd/sparse.sh@102 -- # stat2_b=24576 00:26:20.811 21:22:43 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:26:20.811 21:22:43 -- dd/sparse.sh@103 -- # stat3_b=24576 00:26:20.811 21:22:43 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:26:20.811 ************************************ 00:26:20.811 END TEST dd_sparse_bdev_to_file 00:26:20.811 ************************************ 00:26:20.811 00:26:20.811 real 0m0.967s 00:26:20.811 user 0m0.570s 00:26:20.811 sys 0m0.301s 00:26:20.811 21:22:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:20.811 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:26:20.811 21:22:43 -- dd/sparse.sh@1 -- # cleanup 00:26:20.811 21:22:43 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:26:20.811 21:22:43 -- dd/sparse.sh@12 -- # rm file_zero1 00:26:20.811 21:22:43 -- dd/sparse.sh@13 -- # rm file_zero2 00:26:21.082 21:22:43 -- dd/sparse.sh@14 -- # rm file_zero3 00:26:21.082 00:26:21.082 real 0m3.328s 00:26:21.082 user 0m1.904s 00:26:21.082 sys 0m1.107s 00:26:21.082 ************************************ 00:26:21.082 END TEST spdk_dd_sparse 00:26:21.082 ************************************ 00:26:21.082 21:22:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.082 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:26:21.082 21:22:43 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:26:21.082 21:22:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:21.082 21:22:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:21.082 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:26:21.082 ************************************ 00:26:21.082 START TEST spdk_dd_negative 00:26:21.082 ************************************ 00:26:21.082 21:22:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:26:21.082 * Looking for test storage... 
00:26:21.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:21.082 21:22:43 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:21.082 21:22:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.082 21:22:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.082 21:22:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.082 21:22:43 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:21.082 21:22:43 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:21.082 21:22:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:21.082 21:22:43 -- paths/export.sh@5 -- # export PATH 00:26:21.082 21:22:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:21.082 21:22:43 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:21.082 21:22:43 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:21.082 21:22:43 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:21.082 21:22:43 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:21.082 21:22:43 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:26:21.082 21:22:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:21.082 21:22:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:21.082 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:26:21.082 ************************************ 00:26:21.082 
START TEST dd_invalid_arguments 00:26:21.082 ************************************ 00:26:21.082 21:22:43 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:26:21.082 21:22:43 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:26:21.082 21:22:43 -- common/autotest_common.sh@640 -- # local es=0 00:26:21.082 21:22:43 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:26:21.082 21:22:43 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.082 21:22:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:21.082 21:22:43 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.082 21:22:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:21.082 21:22:43 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.082 21:22:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:21.082 21:22:43 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.082 21:22:43 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:21.082 21:22:43 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:26:21.082 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:26:21.082 options: 00:26:21.082 -c, --config JSON config file (default none) 00:26:21.082 --json JSON config file (default none) 00:26:21.082 --json-ignore-init-errors 00:26:21.082 don't exit on invalid config entry 00:26:21.082 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:26:21.082 -g, --single-file-segments 00:26:21.082 force creating just one hugetlbfs file 00:26:21.082 -h, --help show this usage 00:26:21.083 -i, --shm-id shared memory ID (optional) 00:26:21.083 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:26:21.083 --lcores lcore to CPU mapping list. The list is in the format: 00:26:21.083 [<,lcores[@CPUs]>...] 00:26:21.083 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:26:21.083 Within the group, '-' is used for range separator, 00:26:21.083 ',' is used for single number separator. 00:26:21.083 '( )' can be omitted for single element group, 00:26:21.083 '@' can be omitted if cpus and lcores have the same value 00:26:21.083 -n, --mem-channels channel number of memory channels used for DPDK 00:26:21.083 -p, --main-core main (primary) core for DPDK 00:26:21.083 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:26:21.083 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:26:21.083 --disable-cpumask-locks Disable CPU core lock files. 
00:26:21.083 --silence-noticelog disable notice level logging to stderr 00:26:21.083 --msg-mempool-size global message memory pool size in count (default: 262143) 00:26:21.083 -u, --no-pci disable PCI access 00:26:21.083 --wait-for-rpc wait for RPCs to initialize subsystems 00:26:21.083 --max-delay maximum reactor delay (in microseconds) 00:26:21.083 -B, --pci-blocked pci addr to block (can be used more than once) 00:26:21.083 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:26:21.083 -R, --huge-unlink unlink huge files after initialization 00:26:21.083 -v, --version print SPDK version 00:26:21.083 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:26:21.083 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:26:21.083 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:26:21.083 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:26:21.083 Tracepoints vary in size and can use more than one trace entry. 00:26:21.083 --rpcs-allowed comma-separated list of permitted RPCs 00:26:21.083 --env-context Opaque context for use of the env implementation 00:26:21.083 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:26:21.083 --no-huge run without using hugepages 00:26:21.083 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:26:21.083 -e, --tpoint-group [:] 00:26:21.083 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:26:21.083 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:26:21.083 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:26:21.083 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:26:21.083 [2024-06-07 21:22:43.693755] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:26:21.083 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:26:21.083 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:26:21.083 [--------- DD Options ---------] 00:26:21.083 --if Input file. Must specify either --if or --ib. 00:26:21.083 --ib Input bdev. Must specify either --if or --ib. 00:26:21.083 --of Output file. Must specify either --of or --ob. 00:26:21.083 --ob Output bdev. Must specify either --of or --ob. 00:26:21.083 --iflag Input file flags. 00:26:21.083 --oflag Output file flags. 00:26:21.083 --bs I/O unit size (default: 4096) 00:26:21.083 --qd Queue depth (default: 2) 00:26:21.083 --count I/O unit count. The number of I/O units to copy.
(default: all) 00:26:21.083 --skip Skip this many I/O units at start of input. (default: 0) 00:26:21.083 --seek Skip this many I/O units at start of output. (default: 0) 00:26:21.083 --aio Force usage of AIO. (by default io_uring is used if available) 00:26:21.083 --sparse Enable hole skipping in input target 00:26:21.083 Available iflag and oflag values: 00:26:21.083 append - append mode 00:26:21.083 direct - use direct I/O for data 00:26:21.083 directory - fail unless a directory 00:26:21.083 dsync - use synchronized I/O for data 00:26:21.083 noatime - do not update access time 00:26:21.083 noctty - do not assign controlling terminal from file 00:26:21.083 nofollow - do not follow symlinks 00:26:21.083 nonblock - use non-blocking I/O 00:26:21.083 sync - use synchronized I/O for data and metadata 00:26:21.083 21:22:43 -- common/autotest_common.sh@643 -- # es=2 00:26:21.083 21:22:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:21.083 21:22:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:21.083 ************************************ 00:26:21.083 END TEST dd_invalid_arguments 00:26:21.083 ************************************ 00:26:21.083 21:22:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:21.083 00:26:21.083 real 0m0.094s 00:26:21.083 user 0m0.047s 00:26:21.083 sys 0m0.047s 00:26:21.083 21:22:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.083 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:26:21.369 21:22:43 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:26:21.369 21:22:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:21.369 21:22:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:21.369 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:26:21.369 ************************************ 00:26:21.369 START TEST dd_double_input 00:26:21.369 ************************************ 00:26:21.369 21:22:43 -- common/autotest_common.sh@1104 -- # double_input 00:26:21.369 21:22:43 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:26:21.369 21:22:43 -- common/autotest_common.sh@640 -- # local es=0 00:26:21.369 21:22:43 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:26:21.369 21:22:43 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.369 21:22:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:21.369 21:22:43 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.369 21:22:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:21.369 21:22:43 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.369 21:22:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:21.369 21:22:43 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.369 21:22:43 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:21.369 21:22:43 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:26:21.369 [2024-06-07 21:22:43.842690] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:26:21.369 21:22:43 -- common/autotest_common.sh@643 -- # es=22 00:26:21.369 21:22:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:21.369 21:22:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:21.369 21:22:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:21.369 ************************************ 00:26:21.369 END TEST dd_double_input 00:26:21.369 ************************************ 00:26:21.369 00:26:21.369 real 0m0.099s 00:26:21.369 user 0m0.043s 00:26:21.369 sys 0m0.057s 00:26:21.369 21:22:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.369 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:26:21.369 21:22:43 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:26:21.369 21:22:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:21.370 21:22:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:21.370 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:26:21.370 ************************************ 00:26:21.370 START TEST dd_double_output 00:26:21.370 ************************************ 00:26:21.370 21:22:43 -- common/autotest_common.sh@1104 -- # double_output 00:26:21.370 21:22:43 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:26:21.370 21:22:43 -- common/autotest_common.sh@640 -- # local es=0 00:26:21.370 21:22:43 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:26:21.370 21:22:43 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.370 21:22:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:21.370 21:22:43 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.370 21:22:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:21.370 21:22:43 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.370 21:22:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:21.370 21:22:43 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.370 21:22:43 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:21.370 21:22:43 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:26:21.370 [2024-06-07 21:22:43.999985] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
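Both negative cases above trip the same usage rule: --if/--ib and --of/--ob are mutually exclusive pairs, and exactly one member of each pair must be supplied. For contrast, a well-formed invocation built from the usage text would look roughly like this (input and output paths are assumed for illustration):

# file-to-file copy in 4 KiB units at queue depth 2 (the documented defaults)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/tmp/in.bin --of=/tmp/out.bin --bs=4096 --qd=2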
00:26:21.628 ************************************ 00:26:21.628 END TEST dd_double_output 00:26:21.628 ************************************ 00:26:21.628 21:22:44 -- common/autotest_common.sh@643 -- # es=22 00:26:21.628 21:22:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:21.628 21:22:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:21.628 21:22:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:21.628 00:26:21.628 real 0m0.100s 00:26:21.628 user 0m0.058s 00:26:21.628 sys 0m0.042s 00:26:21.628 21:22:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.628 21:22:44 -- common/autotest_common.sh@10 -- # set +x 00:26:21.628 21:22:44 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:26:21.628 21:22:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:21.628 21:22:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:21.628 21:22:44 -- common/autotest_common.sh@10 -- # set +x 00:26:21.628 ************************************ 00:26:21.628 START TEST dd_no_input 00:26:21.628 ************************************ 00:26:21.628 21:22:44 -- common/autotest_common.sh@1104 -- # no_input 00:26:21.628 21:22:44 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:26:21.628 21:22:44 -- common/autotest_common.sh@640 -- # local es=0 00:26:21.628 21:22:44 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:26:21.628 21:22:44 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.628 21:22:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:21.628 21:22:44 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.628 21:22:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:21.628 21:22:44 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.628 21:22:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:21.628 21:22:44 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.628 21:22:44 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:21.628 21:22:44 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:26:21.628 [2024-06-07 21:22:44.150507] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:26:21.629 ************************************ 00:26:21.629 END TEST dd_no_input 00:26:21.629 ************************************ 00:26:21.629 21:22:44 -- common/autotest_common.sh@643 -- # es=22 00:26:21.629 21:22:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:21.629 21:22:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:21.629 21:22:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:21.629 00:26:21.629 real 0m0.103s 00:26:21.629 user 0m0.053s 00:26:21.629 sys 0m0.050s 00:26:21.629 21:22:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.629 21:22:44 -- common/autotest_common.sh@10 -- # set +x 00:26:21.629 21:22:44 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:26:21.629 21:22:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:21.629 21:22:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:21.629 21:22:44 -- common/autotest_common.sh@10 -- # set +x 00:26:21.629 ************************************ 
00:26:21.629 START TEST dd_no_output 00:26:21.629 ************************************ 00:26:21.629 21:22:44 -- common/autotest_common.sh@1104 -- # no_output 00:26:21.629 21:22:44 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:21.629 21:22:44 -- common/autotest_common.sh@640 -- # local es=0 00:26:21.629 21:22:44 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:21.629 21:22:44 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.629 21:22:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:21.629 21:22:44 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.629 21:22:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:21.629 21:22:44 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.629 21:22:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:21.629 21:22:44 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.629 21:22:44 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:21.629 21:22:44 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:21.887 [2024-06-07 21:22:44.304532] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:26:21.887 21:22:44 -- common/autotest_common.sh@643 -- # es=22 00:26:21.887 21:22:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:21.887 21:22:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:21.887 ************************************ 00:26:21.887 END TEST dd_no_output 00:26:21.887 ************************************ 00:26:21.887 21:22:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:21.887 00:26:21.887 real 0m0.093s 00:26:21.887 user 0m0.045s 00:26:21.887 sys 0m0.047s 00:26:21.887 21:22:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.887 21:22:44 -- common/autotest_common.sh@10 -- # set +x 00:26:21.887 21:22:44 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:26:21.887 21:22:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:21.887 21:22:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:21.887 21:22:44 -- common/autotest_common.sh@10 -- # set +x 00:26:21.887 ************************************ 00:26:21.887 START TEST dd_wrong_blocksize 00:26:21.887 ************************************ 00:26:21.887 21:22:44 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:26:21.887 21:22:44 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:26:21.887 21:22:44 -- common/autotest_common.sh@640 -- # local es=0 00:26:21.887 21:22:44 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:26:21.887 21:22:44 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.888 21:22:44 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:26:21.888 21:22:44 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.888 21:22:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:21.888 21:22:44 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.888 21:22:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:21.888 21:22:44 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.888 21:22:44 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:21.888 21:22:44 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:26:21.888 [2024-06-07 21:22:44.446629] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:26:21.888 21:22:44 -- common/autotest_common.sh@643 -- # es=22 00:26:21.888 21:22:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:21.888 21:22:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:21.888 21:22:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:21.888 00:26:21.888 real 0m0.091s 00:26:21.888 user 0m0.061s 00:26:21.888 sys 0m0.030s 00:26:21.888 21:22:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.888 21:22:44 -- common/autotest_common.sh@10 -- # set +x 00:26:21.888 ************************************ 00:26:21.888 END TEST dd_wrong_blocksize 00:26:21.888 ************************************ 00:26:21.888 21:22:44 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:26:21.888 21:22:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:21.888 21:22:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:21.888 21:22:44 -- common/autotest_common.sh@10 -- # set +x 00:26:21.888 ************************************ 00:26:21.888 START TEST dd_smaller_blocksize 00:26:21.888 ************************************ 00:26:21.888 21:22:44 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:26:21.888 21:22:44 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:26:21.888 21:22:44 -- common/autotest_common.sh@640 -- # local es=0 00:26:21.888 21:22:44 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:26:21.888 21:22:44 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.888 21:22:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:21.888 21:22:44 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.888 21:22:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:21.888 21:22:44 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.888 21:22:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:21.888 21:22:44 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.888 21:22:44 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:26:21.888 21:22:44 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:26:22.146 [2024-06-07 21:22:44.595813] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:22.146 [2024-06-07 21:22:44.596073] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149706 ] 00:26:22.146 [2024-06-07 21:22:44.765967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.404 [2024-06-07 21:22:44.847658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.404 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:26:22.404 [2024-06-07 21:22:45.011066] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:26:22.404 [2024-06-07 21:22:45.011159] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:22.662 [2024-06-07 21:22:45.145456] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:22.662 21:22:45 -- common/autotest_common.sh@643 -- # es=244 00:26:22.662 21:22:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:22.662 21:22:45 -- common/autotest_common.sh@652 -- # es=116 00:26:22.662 21:22:45 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:22.662 21:22:45 -- common/autotest_common.sh@660 -- # es=1 00:26:22.662 21:22:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:22.662 00:26:22.662 real 0m0.719s 00:26:22.662 user 0m0.381s 00:26:22.662 sys 0m0.238s 00:26:22.662 ************************************ 00:26:22.662 END TEST dd_smaller_blocksize 00:26:22.662 ************************************ 00:26:22.662 21:22:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:22.662 21:22:45 -- common/autotest_common.sh@10 -- # set +x 00:26:22.662 21:22:45 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:26:22.662 21:22:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:22.662 21:22:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:22.662 21:22:45 -- common/autotest_common.sh@10 -- # set +x 00:26:22.662 ************************************ 00:26:22.662 START TEST dd_invalid_count 00:26:22.662 ************************************ 00:26:22.662 21:22:45 -- common/autotest_common.sh@1104 -- # invalid_count 00:26:22.662 21:22:45 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:26:22.662 21:22:45 -- common/autotest_common.sh@640 -- # local es=0 00:26:22.662 21:22:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:26:22.662 21:22:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.662 21:22:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.662 21:22:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.662 21:22:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.662 21:22:45 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.662 21:22:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.662 21:22:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.662 21:22:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:22.662 21:22:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:26:22.921 [2024-06-07 21:22:45.368495] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:26:22.921 21:22:45 -- common/autotest_common.sh@643 -- # es=22 00:26:22.921 21:22:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:22.921 21:22:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:22.921 21:22:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:22.921 00:26:22.921 real 0m0.101s 00:26:22.921 user 0m0.055s 00:26:22.921 sys 0m0.047s 00:26:22.921 21:22:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:22.921 21:22:45 -- common/autotest_common.sh@10 -- # set +x 00:26:22.921 ************************************ 00:26:22.921 END TEST dd_invalid_count 00:26:22.921 ************************************ 00:26:22.921 21:22:45 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:26:22.921 21:22:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:22.921 21:22:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:22.921 21:22:45 -- common/autotest_common.sh@10 -- # set +x 00:26:22.921 ************************************ 00:26:22.921 START TEST dd_invalid_oflag 00:26:22.921 ************************************ 00:26:22.921 21:22:45 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:26:22.921 21:22:45 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:26:22.921 21:22:45 -- common/autotest_common.sh@640 -- # local es=0 00:26:22.921 21:22:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:26:22.921 21:22:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.921 21:22:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.921 21:22:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.921 21:22:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.921 21:22:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.921 21:22:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.921 21:22:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.921 21:22:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:22.921 21:22:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:26:22.921 [2024-06-07 21:22:45.517429] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:26:22.921 21:22:45 -- common/autotest_common.sh@643 -- # es=22 00:26:22.921 21:22:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:22.921 21:22:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:22.921 
21:22:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:22.921 ************************************ 00:26:22.921 END TEST dd_invalid_oflag 00:26:22.921 ************************************ 00:26:22.921 00:26:22.921 real 0m0.098s 00:26:22.921 user 0m0.038s 00:26:22.921 sys 0m0.061s 00:26:22.921 21:22:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:22.921 21:22:45 -- common/autotest_common.sh@10 -- # set +x 00:26:23.179 21:22:45 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:26:23.179 21:22:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:23.179 21:22:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:23.179 21:22:45 -- common/autotest_common.sh@10 -- # set +x 00:26:23.179 ************************************ 00:26:23.179 START TEST dd_invalid_iflag 00:26:23.179 ************************************ 00:26:23.179 21:22:45 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:26:23.179 21:22:45 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:26:23.179 21:22:45 -- common/autotest_common.sh@640 -- # local es=0 00:26:23.179 21:22:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:26:23.179 21:22:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.179 21:22:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.179 21:22:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.179 21:22:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.179 21:22:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.179 21:22:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.180 21:22:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.180 21:22:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:23.180 21:22:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:26:23.180 [2024-06-07 21:22:45.666358] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:26:23.180 21:22:45 -- common/autotest_common.sh@643 -- # es=22 00:26:23.180 21:22:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:23.180 21:22:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:23.180 21:22:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:23.180 00:26:23.180 real 0m0.098s 00:26:23.180 user 0m0.046s 00:26:23.180 sys 0m0.053s 00:26:23.180 21:22:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:23.180 ************************************ 00:26:23.180 END TEST dd_invalid_iflag 00:26:23.180 ************************************ 00:26:23.180 21:22:45 -- common/autotest_common.sh@10 -- # set +x 00:26:23.180 21:22:45 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:26:23.180 21:22:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:23.180 21:22:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:23.180 21:22:45 -- common/autotest_common.sh@10 -- # set +x 00:26:23.180 ************************************ 00:26:23.180 START TEST dd_unknown_flag 00:26:23.180 ************************************ 00:26:23.180 21:22:45 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:26:23.180 21:22:45 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:26:23.180 21:22:45 -- common/autotest_common.sh@640 -- # local es=0 00:26:23.180 21:22:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:26:23.180 21:22:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.180 21:22:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.180 21:22:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.180 21:22:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.180 21:22:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.180 21:22:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.180 21:22:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.180 21:22:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:23.180 21:22:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:26:23.180 [2024-06-07 21:22:45.809706] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:23.180 [2024-06-07 21:22:45.809901] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149817 ] 00:26:23.438 [2024-06-07 21:22:45.957862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.438 [2024-06-07 21:22:46.024832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.696 [2024-06-07 21:22:46.114483] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:26:23.696 [2024-06-07 21:22:46.114600] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Invalid argument 00:26:23.696 [2024-06-07 21:22:46.114640] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Invalid argument 00:26:23.696 [2024-06-07 21:22:46.114688] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:23.696 [2024-06-07 21:22:46.236633] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:23.697 21:22:46 -- common/autotest_common.sh@643 -- # es=234 00:26:23.697 21:22:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:23.697 21:22:46 -- common/autotest_common.sh@652 -- # es=106 00:26:23.697 21:22:46 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:23.697 21:22:46 -- common/autotest_common.sh@660 -- # es=1 00:26:23.697 21:22:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:23.697 ************************************ 00:26:23.697 END TEST dd_unknown_flag 00:26:23.697 ************************************ 00:26:23.697 00:26:23.697 real 0m0.589s 00:26:23.697 user 0m0.294s 00:26:23.697 sys 0m0.195s 00:26:23.697 21:22:46 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:26:23.697 21:22:46 -- common/autotest_common.sh@10 -- # set +x 00:26:23.955 21:22:46 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:26:23.955 21:22:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:23.955 21:22:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:23.955 21:22:46 -- common/autotest_common.sh@10 -- # set +x 00:26:23.955 ************************************ 00:26:23.955 START TEST dd_invalid_json 00:26:23.955 ************************************ 00:26:23.955 21:22:46 -- common/autotest_common.sh@1104 -- # invalid_json 00:26:23.955 21:22:46 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:26:23.955 21:22:46 -- common/autotest_common.sh@640 -- # local es=0 00:26:23.955 21:22:46 -- dd/negative_dd.sh@95 -- # : 00:26:23.955 21:22:46 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:26:23.955 21:22:46 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.955 21:22:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.955 21:22:46 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.955 21:22:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.955 21:22:46 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.955 21:22:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.955 21:22:46 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.955 21:22:46 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:23.955 21:22:46 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:26:23.955 [2024-06-07 21:22:46.447710] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:23.955 [2024-06-07 21:22:46.447890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149842 ] 00:26:23.955 [2024-06-07 21:22:46.605696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.213 [2024-06-07 21:22:46.702547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.213 [2024-06-07 21:22:46.702732] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:26:24.213 [2024-06-07 21:22:46.702781] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:24.213 [2024-06-07 21:22:46.702852] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:24.213 21:22:46 -- common/autotest_common.sh@643 -- # es=234 00:26:24.213 21:22:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:24.213 21:22:46 -- common/autotest_common.sh@652 -- # es=106 00:26:24.213 21:22:46 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:24.213 21:22:46 -- common/autotest_common.sh@660 -- # es=1 00:26:24.213 21:22:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:24.213 00:26:24.213 real 0m0.418s 00:26:24.213 user 0m0.212s 00:26:24.213 sys 0m0.104s 00:26:24.213 ************************************ 00:26:24.213 END TEST dd_invalid_json 00:26:24.213 ************************************ 00:26:24.213 21:22:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:24.213 21:22:46 -- common/autotest_common.sh@10 -- # set +x 00:26:24.213 ************************************ 00:26:24.213 END TEST spdk_dd_negative 00:26:24.213 ************************************ 00:26:24.213 00:26:24.213 real 0m3.312s 00:26:24.213 user 0m1.736s 00:26:24.213 sys 0m1.220s 00:26:24.213 21:22:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:24.213 21:22:46 -- common/autotest_common.sh@10 -- # set +x 00:26:24.471 00:26:24.471 real 1m18.649s 00:26:24.471 user 0m47.312s 00:26:24.471 sys 0m21.507s 00:26:24.471 ************************************ 00:26:24.471 END TEST spdk_dd 00:26:24.471 ************************************ 00:26:24.471 21:22:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:24.472 21:22:46 -- common/autotest_common.sh@10 -- # set +x 00:26:24.472 21:22:46 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:26:24.472 21:22:46 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:26:24.472 21:22:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:24.472 21:22:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:24.472 21:22:46 -- common/autotest_common.sh@10 -- # set +x 00:26:24.472 ************************************ 00:26:24.472 START TEST blockdev_nvme 00:26:24.472 ************************************ 00:26:24.472 21:22:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:26:24.472 * Looking for test storage... 
00:26:24.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:26:24.472 21:22:47 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:26:24.472 21:22:47 -- bdev/nbd_common.sh@6 -- # set -e 00:26:24.472 21:22:47 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:26:24.472 21:22:47 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:24.472 21:22:47 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:26:24.472 21:22:47 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:26:24.472 21:22:47 -- bdev/blockdev.sh@18 -- # : 00:26:24.472 21:22:47 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:26:24.472 21:22:47 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:26:24.472 21:22:47 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:26:24.472 21:22:47 -- bdev/blockdev.sh@672 -- # uname -s 00:26:24.472 21:22:47 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:26:24.472 21:22:47 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:26:24.472 21:22:47 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:26:24.472 21:22:47 -- bdev/blockdev.sh@681 -- # crypto_device= 00:26:24.472 21:22:47 -- bdev/blockdev.sh@682 -- # dek= 00:26:24.472 21:22:47 -- bdev/blockdev.sh@683 -- # env_ctx= 00:26:24.472 21:22:47 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:26:24.472 21:22:47 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:26:24.472 21:22:47 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:26:24.472 21:22:47 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:26:24.472 21:22:47 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:26:24.472 21:22:47 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=149956 00:26:24.472 21:22:47 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:24.472 21:22:47 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:26:24.472 21:22:47 -- bdev/blockdev.sh@47 -- # waitforlisten 149956 00:26:24.472 21:22:47 -- common/autotest_common.sh@819 -- # '[' -z 149956 ']' 00:26:24.472 21:22:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.472 21:22:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:24.472 21:22:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.472 21:22:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:24.472 21:22:47 -- common/autotest_common.sh@10 -- # set +x 00:26:24.472 [2024-06-07 21:22:47.084186] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:24.472 [2024-06-07 21:22:47.084428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149956 ] 00:26:24.730 [2024-06-07 21:22:47.248219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.730 [2024-06-07 21:22:47.312986] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:24.730 [2024-06-07 21:22:47.313217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.667 21:22:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:25.667 21:22:47 -- common/autotest_common.sh@852 -- # return 0 00:26:25.667 21:22:47 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:26:25.667 21:22:47 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:26:25.667 21:22:47 -- bdev/blockdev.sh@79 -- # local json 00:26:25.667 21:22:47 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:26:25.667 21:22:47 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:25.667 21:22:48 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:26:25.667 21:22:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:25.667 21:22:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.667 21:22:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:25.667 21:22:48 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:26:25.667 21:22:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:25.667 21:22:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.667 21:22:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:25.667 21:22:48 -- bdev/blockdev.sh@738 -- # cat 00:26:25.667 21:22:48 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:26:25.667 21:22:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:25.667 21:22:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.667 21:22:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:25.667 21:22:48 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:26:25.667 21:22:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:25.667 21:22:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.667 21:22:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:25.667 21:22:48 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:26:25.667 21:22:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:25.667 21:22:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.667 21:22:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:25.667 21:22:48 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:26:25.667 21:22:48 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:26:25.667 21:22:48 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:26:25.667 21:22:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:25.667 21:22:48 -- common/autotest_common.sh@10 -- # set +x 00:26:25.667 21:22:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:25.667 21:22:48 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:26:25.667 21:22:48 -- bdev/blockdev.sh@747 -- # jq -r .name 00:26:25.667 21:22:48 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' 
"aliases": [' ' "b5ac645a-88be-4c82-a4f5-a3b6479a11e6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b5ac645a-88be-4c82-a4f5-a3b6479a11e6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:26:25.667 21:22:48 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:26:25.667 21:22:48 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:26:25.667 21:22:48 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:26:25.667 21:22:48 -- bdev/blockdev.sh@752 -- # killprocess 149956 00:26:25.667 21:22:48 -- common/autotest_common.sh@926 -- # '[' -z 149956 ']' 00:26:25.667 21:22:48 -- common/autotest_common.sh@930 -- # kill -0 149956 00:26:25.667 21:22:48 -- common/autotest_common.sh@931 -- # uname 00:26:25.667 21:22:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:25.667 21:22:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 149956 00:26:25.667 21:22:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:25.667 21:22:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:25.667 killing process with pid 149956 00:26:25.667 21:22:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 149956' 00:26:25.667 21:22:48 -- common/autotest_common.sh@945 -- # kill 149956 00:26:25.667 21:22:48 -- common/autotest_common.sh@950 -- # wait 149956 00:26:26.235 21:22:48 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:26.235 21:22:48 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:26:26.235 21:22:48 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:26:26.235 21:22:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:26.235 21:22:48 -- common/autotest_common.sh@10 -- # set +x 00:26:26.235 ************************************ 00:26:26.235 START TEST bdev_hello_world 00:26:26.235 ************************************ 00:26:26.235 21:22:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:26:26.235 [2024-06-07 21:22:48.779966] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:26.235 [2024-06-07 21:22:48.780201] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150023 ] 00:26:26.494 [2024-06-07 21:22:48.937415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.495 [2024-06-07 21:22:49.021976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.754 [2024-06-07 21:22:49.234918] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:26:26.754 [2024-06-07 21:22:49.235004] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:26:26.754 [2024-06-07 21:22:49.235050] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:26:26.754 [2024-06-07 21:22:49.237519] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:26:26.754 [2024-06-07 21:22:49.238058] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:26:26.754 [2024-06-07 21:22:49.238106] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:26:26.754 [2024-06-07 21:22:49.238397] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:26:26.754 00:26:26.754 [2024-06-07 21:22:49.238452] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:26:27.013 00:26:27.013 real 0m0.740s 00:26:27.013 user 0m0.452s 00:26:27.013 sys 0m0.188s 00:26:27.013 ************************************ 00:26:27.013 END TEST bdev_hello_world 00:26:27.013 ************************************ 00:26:27.013 21:22:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:27.013 21:22:49 -- common/autotest_common.sh@10 -- # set +x 00:26:27.013 21:22:49 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:26:27.013 21:22:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:27.013 21:22:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:27.013 21:22:49 -- common/autotest_common.sh@10 -- # set +x 00:26:27.013 ************************************ 00:26:27.013 START TEST bdev_bounds 00:26:27.013 ************************************ 00:26:27.013 21:22:49 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:26:27.013 21:22:49 -- bdev/blockdev.sh@288 -- # bdevio_pid=150052 00:26:27.013 Process bdevio pid: 150052 00:26:27.013 21:22:49 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:27.013 21:22:49 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:26:27.013 21:22:49 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 150052' 00:26:27.013 21:22:49 -- bdev/blockdev.sh@291 -- # waitforlisten 150052 00:26:27.013 21:22:49 -- common/autotest_common.sh@819 -- # '[' -z 150052 ']' 00:26:27.013 21:22:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.013 21:22:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:27.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.013 21:22:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
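bdev_bounds splits the work between two processes: the bdevio server started below with -w (wait for the test-trigger RPC) and a client that fires the CUnit run once the socket is up. Condensed from the traces around this point (arguments copied from the log; backgrounding shown for illustration):

# server: load the bdev config and wait for the RPC that starts the tests
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
# client: trigger the test run against the waiting server
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests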
00:26:27.013 21:22:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:27.013 21:22:49 -- common/autotest_common.sh@10 -- # set +x 00:26:27.013 [2024-06-07 21:22:49.573476] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:27.013 [2024-06-07 21:22:49.573686] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150052 ] 00:26:27.272 [2024-06-07 21:22:49.742617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:27.272 [2024-06-07 21:22:49.832678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.272 [2024-06-07 21:22:49.832784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:27.272 [2024-06-07 21:22:49.832790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.208 21:22:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:28.208 21:22:50 -- common/autotest_common.sh@852 -- # return 0 00:26:28.208 21:22:50 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:26:28.208 I/O targets: 00:26:28.208 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:26:28.208 00:26:28.208 00:26:28.208 CUnit - A unit testing framework for C - Version 2.1-3 00:26:28.208 http://cunit.sourceforge.net/ 00:26:28.208 00:26:28.208 00:26:28.208 Suite: bdevio tests on: Nvme0n1 00:26:28.208 Test: blockdev write read block ...passed 00:26:28.208 Test: blockdev write zeroes read block ...passed 00:26:28.208 Test: blockdev write zeroes read no split ...passed 00:26:28.208 Test: blockdev write zeroes read split ...passed 00:26:28.208 Test: blockdev write zeroes read split partial ...passed 00:26:28.208 Test: blockdev reset ...[2024-06-07 21:22:50.636181] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:26:28.208 [2024-06-07 21:22:50.638382] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:28.208 passed 00:26:28.208 Test: blockdev write read 8 blocks ...passed 00:26:28.208 Test: blockdev write read size > 128k ...passed 00:26:28.208 Test: blockdev write read invalid size ...passed 00:26:28.208 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:28.208 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:28.208 Test: blockdev write read max offset ...passed 00:26:28.208 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:28.208 Test: blockdev writev readv 8 blocks ...passed 00:26:28.208 Test: blockdev writev readv 30 x 1block ...passed 00:26:28.208 Test: blockdev writev readv block ...passed 00:26:28.208 Test: blockdev writev readv size > 128k ...passed 00:26:28.208 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:28.208 Test: blockdev comparev and writev ...[2024-06-07 21:22:50.644406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27e0d000 len:0x1000 00:26:28.208 passed 00:26:28.208 Test: blockdev nvme passthru rw ...[2024-06-07 21:22:50.644573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:26:28.208 passed 00:26:28.208 Test: blockdev nvme passthru vendor specific ...[2024-06-07 21:22:50.645448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:26:28.208 [2024-06-07 21:22:50.645506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:26:28.208 passed 00:26:28.208 Test: blockdev nvme admin passthru ...passed 00:26:28.208 Test: blockdev copy ...passed 00:26:28.208 00:26:28.208 Run Summary: Type Total Ran Passed Failed Inactive 00:26:28.208 suites 1 1 n/a 0 0 00:26:28.208 tests 23 23 23 0 0 00:26:28.208 asserts 152 152 152 0 n/a 00:26:28.208 00:26:28.208 Elapsed time = 0.058 seconds 00:26:28.208 0 00:26:28.208 21:22:50 -- bdev/blockdev.sh@293 -- # killprocess 150052 00:26:28.208 21:22:50 -- common/autotest_common.sh@926 -- # '[' -z 150052 ']' 00:26:28.208 21:22:50 -- common/autotest_common.sh@930 -- # kill -0 150052 00:26:28.208 21:22:50 -- common/autotest_common.sh@931 -- # uname 00:26:28.208 21:22:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:28.208 21:22:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 150052 00:26:28.208 21:22:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:28.208 killing process with pid 150052 00:26:28.208 21:22:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:28.208 21:22:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 150052' 00:26:28.208 21:22:50 -- common/autotest_common.sh@945 -- # kill 150052 00:26:28.208 21:22:50 -- common/autotest_common.sh@950 -- # wait 150052 00:26:28.467 21:22:50 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:26:28.467 00:26:28.467 real 0m1.360s 00:26:28.467 user 0m3.435s 00:26:28.467 sys 0m0.283s 00:26:28.467 ************************************ 00:26:28.467 END TEST bdev_bounds 00:26:28.467 ************************************ 00:26:28.467 21:22:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:28.467 21:22:50 -- common/autotest_common.sh@10 -- # set +x 00:26:28.467 21:22:50 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
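Before bdev_nbd begins, the bdevio process (pid 150052) above is torn down via killprocess: confirm the pid is alive with kill -0, look up its comm name with ps so a sudo wrapper is never signalled directly, then kill it and wait to reap the exit status. A condensed sketch of that pattern, simplified from the xtrace (the real helper handles the sudo case differently rather than just bailing out):

#!/usr/bin/env bash
killprocess_sketch() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 1       # is the process still running?
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name == sudo ]] && return 1  # never signal a sudo wrapper directly
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                  # reap and propagate the exit code
}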
00:26:28.467 21:22:50 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:26:28.467 21:22:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:28.467 21:22:50 -- common/autotest_common.sh@10 -- # set +x 00:26:28.467 ************************************ 00:26:28.467 START TEST bdev_nbd 00:26:28.467 ************************************ 00:26:28.467 21:22:50 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:26:28.467 21:22:50 -- bdev/blockdev.sh@298 -- # uname -s 00:26:28.467 21:22:50 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:26:28.467 21:22:50 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:28.467 21:22:50 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:28.467 21:22:50 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:26:28.467 21:22:50 -- bdev/blockdev.sh@302 -- # local bdev_all 00:26:28.467 21:22:50 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:26:28.467 21:22:50 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:26:28.467 21:22:50 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:26:28.467 21:22:50 -- bdev/blockdev.sh@309 -- # local nbd_all 00:26:28.467 21:22:50 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:26:28.467 21:22:50 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:26:28.467 21:22:50 -- bdev/blockdev.sh@312 -- # local nbd_list 00:26:28.467 21:22:50 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:26:28.467 21:22:50 -- bdev/blockdev.sh@313 -- # local bdev_list 00:26:28.467 21:22:50 -- bdev/blockdev.sh@316 -- # nbd_pid=150109 00:26:28.467 21:22:50 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:26:28.467 21:22:50 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:28.467 21:22:50 -- bdev/blockdev.sh@318 -- # waitforlisten 150109 /var/tmp/spdk-nbd.sock 00:26:28.467 21:22:50 -- common/autotest_common.sh@819 -- # '[' -z 150109 ']' 00:26:28.467 21:22:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:28.467 21:22:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:28.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:28.467 21:22:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:26:28.467 21:22:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:28.467 21:22:50 -- common/autotest_common.sh@10 -- # set +x 00:26:28.467 [2024-06-07 21:22:50.995873] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
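Once bdev_svc is listening on /var/tmp/spdk-nbd.sock, the steps that follow export Nvme0n1 as a kernel block device and wait for it to surface in /proc/partitions before smoke-testing it with one direct read. Condensed from the xtrace below, with paths as they appear in the log and the retry bound of 20 matching the waitfornbd counters:

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

"$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0  # attach the bdev to /dev/nbd0
for i in $(seq 1 20); do
    grep -q -w nbd0 /proc/partitions && break       # kernel has registered the device
    sleep 0.1
done
# a single direct 4 KiB read proves the device answers I/O
dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct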
00:26:28.467 [2024-06-07 21:22:50.996089] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.726 [2024-06-07 21:22:51.150641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.726 [2024-06-07 21:22:51.217069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.294 21:22:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:29.294 21:22:51 -- common/autotest_common.sh@852 -- # return 0 00:26:29.294 21:22:51 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:26:29.294 21:22:51 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:29.294 21:22:51 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:26:29.294 21:22:51 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:26:29.294 21:22:51 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:26:29.294 21:22:51 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:29.294 21:22:51 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:26:29.294 21:22:51 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:26:29.294 21:22:51 -- bdev/nbd_common.sh@24 -- # local i 00:26:29.294 21:22:51 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:26:29.294 21:22:51 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:26:29.294 21:22:51 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:26:29.294 21:22:51 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:26:29.552 21:22:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:26:29.552 21:22:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:26:29.552 21:22:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:26:29.552 21:22:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:29.552 21:22:52 -- common/autotest_common.sh@857 -- # local i 00:26:29.553 21:22:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:29.553 21:22:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:29.553 21:22:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:29.553 21:22:52 -- common/autotest_common.sh@861 -- # break 00:26:29.553 21:22:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:29.553 21:22:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:29.553 21:22:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:29.553 1+0 records in 00:26:29.553 1+0 records out 00:26:29.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456821 s, 9.0 MB/s 00:26:29.553 21:22:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:29.553 21:22:52 -- common/autotest_common.sh@874 -- # size=4096 00:26:29.553 21:22:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:29.553 21:22:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:29.553 21:22:52 -- common/autotest_common.sh@877 -- # return 0 00:26:29.553 21:22:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:29.553 21:22:52 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:26:29.553 21:22:52 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:29.811 21:22:52 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:26:29.811 { 00:26:29.811 "nbd_device": "/dev/nbd0", 00:26:29.811 "bdev_name": "Nvme0n1" 00:26:29.811 } 00:26:29.811 ]' 00:26:29.811 21:22:52 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:26:29.811 21:22:52 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:26:29.811 21:22:52 -- bdev/nbd_common.sh@119 -- # echo '[ 00:26:29.811 { 00:26:29.811 "nbd_device": "/dev/nbd0", 00:26:29.811 "bdev_name": "Nvme0n1" 00:26:29.811 } 00:26:29.811 ]' 00:26:29.811 21:22:52 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:29.811 21:22:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:29.811 21:22:52 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:26:29.811 21:22:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:29.811 21:22:52 -- bdev/nbd_common.sh@51 -- # local i 00:26:29.811 21:22:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:29.811 21:22:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:30.070 21:22:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:30.070 21:22:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:30.070 21:22:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:30.070 21:22:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:30.070 21:22:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:30.070 21:22:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:30.070 21:22:52 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:26:30.070 21:22:52 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:26:30.070 21:22:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:30.070 21:22:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:30.070 21:22:52 -- bdev/nbd_common.sh@41 -- # break 00:26:30.070 21:22:52 -- bdev/nbd_common.sh@45 -- # return 0 00:26:30.070 21:22:52 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:30.070 21:22:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:30.070 21:22:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:30.328 21:22:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:30.328 21:22:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:30.328 21:22:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:30.328 21:22:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:30.328 21:22:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:26:30.328 21:22:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:30.328 21:22:52 -- bdev/nbd_common.sh@65 -- # true 00:26:30.328 21:22:52 -- bdev/nbd_common.sh@65 -- # count=0 00:26:30.328 21:22:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:26:30.328 21:22:52 -- bdev/nbd_common.sh@122 -- # count=0 00:26:30.328 21:22:52 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:26:30.328 21:22:52 -- bdev/nbd_common.sh@127 -- # return 0 00:26:30.328 21:22:52 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:26:30.328 21:22:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:30.328 21:22:52 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:26:30.328 21:22:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:30.328 21:22:52 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:26:30.328 21:22:52 -- bdev/nbd_common.sh@92 -- # local 
nbd_list 00:26:30.328 21:22:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:26:30.328 21:22:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:30.328 21:22:52 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:26:30.329 21:22:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:30.329 21:22:52 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:26:30.329 21:22:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:30.329 21:22:52 -- bdev/nbd_common.sh@12 -- # local i 00:26:30.329 21:22:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:30.329 21:22:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:30.329 21:22:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:26:30.586 /dev/nbd0 00:26:30.586 21:22:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:30.586 21:22:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:30.586 21:22:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:30.586 21:22:53 -- common/autotest_common.sh@857 -- # local i 00:26:30.586 21:22:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:30.586 21:22:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:30.586 21:22:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:30.586 21:22:53 -- common/autotest_common.sh@861 -- # break 00:26:30.587 21:22:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:30.587 21:22:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:30.587 21:22:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:30.587 1+0 records in 00:26:30.587 1+0 records out 00:26:30.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475133 s, 8.6 MB/s 00:26:30.587 21:22:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:30.587 21:22:53 -- common/autotest_common.sh@874 -- # size=4096 00:26:30.587 21:22:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:30.587 21:22:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:30.587 21:22:53 -- common/autotest_common.sh@877 -- # return 0 00:26:30.587 21:22:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:30.587 21:22:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:30.587 21:22:53 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:30.587 21:22:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:30.587 21:22:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:30.845 { 00:26:30.845 "nbd_device": "/dev/nbd0", 00:26:30.845 "bdev_name": "Nvme0n1" 00:26:30.845 } 00:26:30.845 ]' 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:30.845 { 00:26:30.845 "nbd_device": "/dev/nbd0", 00:26:30.845 "bdev_name": "Nvme0n1" 00:26:30.845 } 00:26:30.845 ]' 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@65 -- # count=1 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@66 -- # echo 1 
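The verification pass that follows pushes 1 MiB of random data through /dev/nbd0 with O_DIRECT and then compares the device contents byte-for-byte against the source file, so corruption anywhere in the nbd path fails the test. The same flow in isolation (temp file path as in the log):

#!/usr/bin/env bash
tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it, bypassing the page cache
cmp -b -n 1M "$tmp" /dev/nbd0                             # read back and compare byte-for-byte
rm "$tmp"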
00:26:30.845 21:22:53 -- bdev/nbd_common.sh@95 -- # count=1 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:26:30.845 256+0 records in 00:26:30.845 256+0 records out 00:26:30.845 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00760457 s, 138 MB/s 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:30.845 256+0 records in 00:26:30.845 256+0 records out 00:26:30.845 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0657545 s, 15.9 MB/s 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@51 -- # local i 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:30.845 21:22:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:31.103 21:22:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:31.103 21:22:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:31.103 21:22:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:31.103 21:22:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:31.103 21:22:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:31.103 21:22:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:31.103 21:22:53 -- bdev/nbd_common.sh@41 -- # break 00:26:31.103 21:22:53 -- bdev/nbd_common.sh@45 -- # return 0 00:26:31.103 21:22:53 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:31.103 21:22:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:31.103 21:22:53 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:31.388 21:22:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:31.389 21:22:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:31.389 21:22:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:31.389 21:22:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:31.389 21:22:54 -- bdev/nbd_common.sh@65 -- # echo '' 00:26:31.389 21:22:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:31.389 21:22:54 -- bdev/nbd_common.sh@65 -- # true 00:26:31.389 21:22:54 -- bdev/nbd_common.sh@65 -- # count=0 00:26:31.389 21:22:54 -- bdev/nbd_common.sh@66 -- # echo 0 00:26:31.389 21:22:54 -- bdev/nbd_common.sh@104 -- # count=0 00:26:31.389 21:22:54 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:31.389 21:22:54 -- bdev/nbd_common.sh@109 -- # return 0 00:26:31.389 21:22:54 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:31.389 21:22:54 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:31.389 21:22:54 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:26:31.389 21:22:54 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:26:31.389 21:22:54 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:26:31.389 21:22:54 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:26:31.660 malloc_lvol_verify 00:26:31.660 21:22:54 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:26:31.918 4bcc6fde-e183-4ed2-93d3-10758062b931 00:26:31.918 21:22:54 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:26:32.176 5d9a5dec-41d9-4466-b8d9-a18d3a87fd94 00:26:32.176 21:22:54 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:26:32.435 /dev/nbd0 00:26:32.435 21:22:54 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:26:32.435 mke2fs 1.45.5 (07-Jan-2020) 00:26:32.435 00:26:32.435 Filesystem too small for a journal 00:26:32.435 Creating filesystem with 1024 4k blocks and 1024 inodes 00:26:32.435 00:26:32.435 Allocating group tables: 0/1 done 00:26:32.435 Writing inode tables: 0/1 done 00:26:32.435 Writing superblocks and filesystem accounting information: 0/1 done 00:26:32.435 00:26:32.435 21:22:54 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:26:32.435 21:22:54 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:32.435 21:22:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:32.435 21:22:54 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:26:32.435 21:22:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:32.435 21:22:54 -- bdev/nbd_common.sh@51 -- # local i 00:26:32.435 21:22:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:32.435 21:22:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:32.694 21:22:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:32.694 21:22:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:32.694 21:22:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:32.694 21:22:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:32.694 21:22:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:32.694 21:22:55 
-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:32.694 21:22:55 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:26:32.694 21:22:55 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:26:32.694 21:22:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:32.694 21:22:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:32.694 21:22:55 -- bdev/nbd_common.sh@41 -- # break 00:26:32.694 21:22:55 -- bdev/nbd_common.sh@45 -- # return 0 00:26:32.694 21:22:55 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:26:32.694 21:22:55 -- bdev/nbd_common.sh@147 -- # return 0 00:26:32.694 21:22:55 -- bdev/blockdev.sh@324 -- # killprocess 150109 00:26:32.694 21:22:55 -- common/autotest_common.sh@926 -- # '[' -z 150109 ']' 00:26:32.694 21:22:55 -- common/autotest_common.sh@930 -- # kill -0 150109 00:26:32.694 21:22:55 -- common/autotest_common.sh@931 -- # uname 00:26:32.694 21:22:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:32.694 21:22:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 150109 00:26:32.694 21:22:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:32.694 21:22:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:32.694 21:22:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 150109' 00:26:32.694 killing process with pid 150109 00:26:32.694 21:22:55 -- common/autotest_common.sh@945 -- # kill 150109 00:26:32.694 21:22:55 -- common/autotest_common.sh@950 -- # wait 150109 00:26:32.953 21:22:55 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:26:32.953 00:26:32.953 real 0m4.644s 00:26:32.953 user 0m7.009s 00:26:32.953 sys 0m0.968s 00:26:32.953 21:22:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:32.953 21:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:32.953 ************************************ 00:26:32.953 END TEST bdev_nbd 00:26:32.953 ************************************ 00:26:32.953 21:22:55 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:26:32.953 21:22:55 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:26:32.953 21:22:55 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:26:32.953 skipping fio tests on NVMe due to multi-ns failures. 00:26:32.953 21:22:55 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:33.212 21:22:55 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:33.212 21:22:55 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:26:33.212 21:22:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:33.212 21:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:33.212 ************************************ 00:26:33.212 START TEST bdev_verify 00:26:33.212 ************************************ 00:26:33.212 21:22:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:33.212 [2024-06-07 21:22:55.685443] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
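bdev_verify drives bdevperf with -q 128 (queue depth), -o 4096 (I/O size in bytes), -w verify (write, read back, check the data) and -t 5 (run time in seconds); the core mask -m 0x3 has bits 0 and 1 set, which is why two reactors come up on cores 0 and 1 just below. The invocation spelled out (-C is reproduced as passed by the harness, without interpretation):

#!/usr/bin/env bash
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C \
    -m 0x3   # binary 11: one reactor each on cores 0 and 1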
00:26:33.212 [2024-06-07 21:22:55.686120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150297 ] 00:26:33.212 [2024-06-07 21:22:55.844787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:33.471 [2024-06-07 21:22:55.919341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.471 [2024-06-07 21:22:55.919341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.471 Running I/O for 5 seconds... 00:26:38.744 00:26:38.744 Latency(us) 00:26:38.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.744 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:38.744 Verification LBA range: start 0x0 length 0xa0000 00:26:38.744 Nvme0n1 : 5.01 18452.78 72.08 0.00 0.00 6904.81 558.55 12451.84 00:26:38.744 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:38.744 Verification LBA range: start 0xa0000 length 0xa0000 00:26:38.744 Nvme0n1 : 5.01 18419.90 71.95 0.00 0.00 6917.79 303.48 12273.11 00:26:38.744 =================================================================================================================== 00:26:38.744 Total : 36872.68 144.03 0.00 0.00 6911.30 303.48 12451.84 00:26:46.852 ************************************ 00:26:46.852 END TEST bdev_verify 00:26:46.852 ************************************ 00:26:46.852 00:26:46.852 real 0m13.759s 00:26:46.852 user 0m26.726s 00:26:46.852 sys 0m0.296s 00:26:46.852 21:23:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:46.852 21:23:09 -- common/autotest_common.sh@10 -- # set +x 00:26:46.852 21:23:09 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:46.852 21:23:09 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:26:46.852 21:23:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:46.852 21:23:09 -- common/autotest_common.sh@10 -- # set +x 00:26:46.852 ************************************ 00:26:46.852 START TEST bdev_verify_big_io 00:26:46.852 ************************************ 00:26:46.852 21:23:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:46.852 [2024-06-07 21:23:09.515186] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:46.852 [2024-06-07 21:23:09.515417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150517 ] 00:26:47.110 [2024-06-07 21:23:09.685407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:47.110 [2024-06-07 21:23:09.745543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.110 [2024-06-07 21:23:09.745550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.369 Running I/O for 5 seconds... 
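A quick cross-check of the bdev_verify Latency table above: MiB/s is IOPS times the 4 KiB I/O size, so 18452.78 * 4096 / 2^20 = 72.08 MiB/s for the core 0x1 job and 18419.90 * 4096 / 2^20 = 71.95 MiB/s for the core 0x2 job, matching the reported columns. The Total row is the plain sum: 36872.68 IOPS and 144.03 MiB/s.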
00:26:52.641 00:26:52.641 Latency(us) 00:26:52.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.641 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:52.641 Verification LBA range: start 0x0 length 0xa000 00:26:52.641 Nvme0n1 : 5.04 1696.24 106.02 0.00 0.00 74389.43 355.61 119632.99 00:26:52.642 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:52.642 Verification LBA range: start 0xa000 length 0xa000 00:26:52.642 Nvme0n1 : 5.04 1800.68 112.54 0.00 0.00 70068.83 173.15 104380.97 00:26:52.642 =================================================================================================================== 00:26:52.642 Total : 3496.92 218.56 0.00 0.00 72164.63 173.15 119632.99 00:26:53.211 00:26:53.211 real 0m6.153s 00:26:53.211 user 0m11.573s 00:26:53.211 sys 0m0.210s 00:26:53.211 21:23:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:53.211 ************************************ 00:26:53.211 END TEST bdev_verify_big_io 00:26:53.211 ************************************ 00:26:53.211 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:26:53.211 21:23:15 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:53.211 21:23:15 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:26:53.211 21:23:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:53.211 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:26:53.211 ************************************ 00:26:53.211 START TEST bdev_write_zeroes 00:26:53.211 ************************************ 00:26:53.211 21:23:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:53.211 [2024-06-07 21:23:15.709574] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:53.211 [2024-06-07 21:23:15.710007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150605 ] 00:26:53.211 [2024-06-07 21:23:15.864223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.470 [2024-06-07 21:23:15.937141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.729 Running I/O for 1 seconds... 
00:26:54.665 00:26:54.665 Latency(us) 00:26:54.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.665 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:54.665 Nvme0n1 : 1.00 65200.49 254.69 0.00 0.00 1958.11 595.78 14298.76 00:26:54.665 =================================================================================================================== 00:26:54.665 Total : 65200.49 254.69 0.00 0.00 1958.11 595.78 14298.76 00:26:54.924 00:26:54.924 real 0m1.745s 00:26:54.924 user 0m1.444s 00:26:54.924 sys 0m0.201s 00:26:54.924 21:23:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:54.924 ************************************ 00:26:54.924 END TEST bdev_write_zeroes 00:26:54.924 21:23:17 -- common/autotest_common.sh@10 -- # set +x 00:26:54.924 ************************************ 00:26:54.924 21:23:17 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:54.924 21:23:17 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:26:54.924 21:23:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:54.924 21:23:17 -- common/autotest_common.sh@10 -- # set +x 00:26:54.924 ************************************ 00:26:54.924 START TEST bdev_json_nonenclosed 00:26:54.924 ************************************ 00:26:54.924 21:23:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:54.924 [2024-06-07 21:23:17.497163] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:54.924 [2024-06-07 21:23:17.497404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150655 ] 00:26:55.183 [2024-06-07 21:23:17.644806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.183 [2024-06-07 21:23:17.722508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.183 [2024-06-07 21:23:17.722996] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:26:55.183 [2024-06-07 21:23:17.723140] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:55.183 00:26:55.183 real 0m0.374s 00:26:55.183 user 0m0.187s 00:26:55.183 sys 0m0.086s 00:26:55.184 21:23:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:55.184 21:23:17 -- common/autotest_common.sh@10 -- # set +x 00:26:55.184 ************************************ 00:26:55.184 END TEST bdev_json_nonenclosed 00:26:55.184 ************************************ 00:26:55.442 21:23:17 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:55.442 21:23:17 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:26:55.442 21:23:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:55.442 21:23:17 -- common/autotest_common.sh@10 -- # set +x 00:26:55.442 ************************************ 00:26:55.442 START TEST bdev_json_nonarray 00:26:55.442 ************************************ 00:26:55.442 21:23:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:55.442 [2024-06-07 21:23:17.922754] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:55.442 [2024-06-07 21:23:17.923143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150677 ] 00:26:55.442 [2024-06-07 21:23:18.074105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.702 [2024-06-07 21:23:18.140984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.702 [2024-06-07 21:23:18.141487] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
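Both negative tests above hand spdk_subsystem_init_from_json_config a config file that breaks one structural rule: the file must be a JSON object whose "subsystems" key holds an array. Plausible shapes of the three cases, for orientation only (the checked-in nonenclosed.json and nonarray.json are not reproduced in this log):

  valid:        { "subsystems": [ { "subsystem": "bdev", "config": [ ... ] } ] }
  nonenclosed:  "subsystems": [ ... ]         <- not enclosed in {}
  nonarray:     { "subsystems": { ... } }     <- "subsystems" should be an array

In each failure case the app exits through spdk_app_stop with a non-zero code, which the tests treat as the expected outcome.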
00:26:55.702 [2024-06-07 21:23:18.141628] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:55.702 ************************************ 00:26:55.702 END TEST bdev_json_nonarray 00:26:55.702 ************************************ 00:26:55.702 00:26:55.702 real 0m0.372s 00:26:55.702 user 0m0.164s 00:26:55.702 sys 0m0.108s 00:26:55.702 21:23:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:55.702 21:23:18 -- common/autotest_common.sh@10 -- # set +x 00:26:55.702 21:23:18 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:26:55.702 21:23:18 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:26:55.702 21:23:18 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:26:55.702 21:23:18 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:26:55.702 21:23:18 -- bdev/blockdev.sh@809 -- # cleanup 00:26:55.702 21:23:18 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:26:55.702 21:23:18 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:55.702 21:23:18 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:26:55.702 21:23:18 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:26:55.702 21:23:18 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:26:55.702 21:23:18 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:26:55.702 ************************************ 00:26:55.702 END TEST blockdev_nvme 00:26:55.702 ************************************ 00:26:55.702 00:26:55.702 real 0m31.360s 00:26:55.702 user 0m53.034s 00:26:55.702 sys 0m3.015s 00:26:55.702 21:23:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:55.702 21:23:18 -- common/autotest_common.sh@10 -- # set +x 00:26:55.702 21:23:18 -- spdk/autotest.sh@219 -- # uname -s 00:26:55.702 21:23:18 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:26:55.702 21:23:18 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:26:55.702 21:23:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:55.702 21:23:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:55.702 21:23:18 -- common/autotest_common.sh@10 -- # set +x 00:26:55.702 ************************************ 00:26:55.702 START TEST blockdev_nvme_gpt 00:26:55.702 ************************************ 00:26:55.702 21:23:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:26:55.961 * Looking for test storage... 
00:26:55.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:26:55.961 21:23:18 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:26:55.961 21:23:18 -- bdev/nbd_common.sh@6 -- # set -e 00:26:55.961 21:23:18 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:26:55.961 21:23:18 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:55.961 21:23:18 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:26:55.961 21:23:18 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:26:55.961 21:23:18 -- bdev/blockdev.sh@18 -- # : 00:26:55.961 21:23:18 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:26:55.961 21:23:18 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:26:55.961 21:23:18 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:26:55.961 21:23:18 -- bdev/blockdev.sh@672 -- # uname -s 00:26:55.961 21:23:18 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:26:55.961 21:23:18 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:26:55.961 21:23:18 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:26:55.961 21:23:18 -- bdev/blockdev.sh@681 -- # crypto_device= 00:26:55.961 21:23:18 -- bdev/blockdev.sh@682 -- # dek= 00:26:55.961 21:23:18 -- bdev/blockdev.sh@683 -- # env_ctx= 00:26:55.961 21:23:18 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:26:55.961 21:23:18 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:26:55.961 21:23:18 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:26:55.962 21:23:18 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:26:55.962 21:23:18 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:26:55.962 21:23:18 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=150761 00:26:55.962 21:23:18 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:26:55.962 21:23:18 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:55.962 21:23:18 -- bdev/blockdev.sh@47 -- # waitforlisten 150761 00:26:55.962 21:23:18 -- common/autotest_common.sh@819 -- # '[' -z 150761 ']' 00:26:55.962 21:23:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.962 21:23:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:55.962 21:23:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.962 21:23:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:55.962 21:23:18 -- common/autotest_common.sh@10 -- # set +x 00:26:55.962 [2024-06-07 21:23:18.475700] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
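The gpt setup that follows first filters out zoned namespaces: for every nvme block device it reads /sys/block/<dev>/queue/zoned, a sysfs attribute that reports "none" for a conventional drive. A minimal sketch of the check mirrored from the is_block_zoned xtrace:

#!/usr/bin/env bash
# Returns success (0) only for a zoned block device, e.g. nvme0n1.
is_block_zoned_sketch() {
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(cat "/sys/block/$device/queue/zoned") != none ]]
}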
00:26:55.962 [2024-06-07 21:23:18.476047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150761 ] 00:26:55.962 [2024-06-07 21:23:18.628182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.222 [2024-06-07 21:23:18.706208] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:56.222 [2024-06-07 21:23:18.706723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.790 21:23:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:56.790 21:23:19 -- common/autotest_common.sh@852 -- # return 0 00:26:56.790 21:23:19 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:26:56.790 21:23:19 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:26:56.790 21:23:19 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:57.050 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:26:57.050 Waiting for block devices as requested 00:26:57.050 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:57.310 21:23:19 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:26:57.310 21:23:19 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:26:57.310 21:23:19 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:26:57.310 21:23:19 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:26:57.310 21:23:19 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:26:57.310 21:23:19 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:26:57.310 21:23:19 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:26:57.310 21:23:19 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:57.310 21:23:19 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:26:57.310 21:23:19 -- bdev/blockdev.sh@105 -- # nvme_devs=(/sys/bus/pci/drivers/nvme/*/nvme/nvme*/nvme*n*) 00:26:57.310 21:23:19 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:26:57.310 21:23:19 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:26:57.310 21:23:19 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:26:57.310 21:23:19 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:26:57.310 21:23:19 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:26:57.310 21:23:19 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:26:57.310 21:23:19 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:26:57.310 BYT; 00:26:57.310 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:26:57.310 21:23:19 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:26:57.310 BYT; 00:26:57.310 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:26:57.310 21:23:19 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:26:57.310 21:23:19 -- bdev/blockdev.sh@114 -- # break 00:26:57.310 21:23:19 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:26:57.310 21:23:19 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:26:57.310 21:23:19 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:26:57.310 21:23:19 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% 
mkpart SPDK_TEST_second 50% 100% 00:26:57.879 21:23:20 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:26:57.879 21:23:20 -- scripts/common.sh@410 -- # local spdk_guid 00:26:57.879 21:23:20 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:26:57.879 21:23:20 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:26:57.879 21:23:20 -- scripts/common.sh@415 -- # IFS='()' 00:26:57.879 21:23:20 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:26:57.879 21:23:20 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:26:57.879 21:23:20 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:26:57.879 21:23:20 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:26:57.879 21:23:20 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:26:57.879 21:23:20 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:26:57.879 21:23:20 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:26:57.879 21:23:20 -- scripts/common.sh@422 -- # local spdk_guid 00:26:57.879 21:23:20 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:26:57.879 21:23:20 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:26:57.879 21:23:20 -- scripts/common.sh@427 -- # IFS='()' 00:26:57.879 21:23:20 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:26:57.879 21:23:20 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:26:57.879 21:23:20 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:26:57.879 21:23:20 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:26:57.879 21:23:20 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:26:57.879 21:23:20 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:26:57.879 21:23:20 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:26:59.318 The operation has completed successfully. 00:26:59.318 21:23:21 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:27:00.255 The operation has completed successfully. 
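The two sgdisk calls above retag partitions 1 and 2 with SPDK's own GPT type GUIDs, which the harness scrapes out of gpt.h by splitting the matching #define line on parentheses and stripping the 0x prefixes. The extraction condensed into standalone form (header path as in the log; the exact parameter expansions in scripts/common.sh may differ):

#!/usr/bin/env bash
gpt_h=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$gpt_h")
spdk_guid=${spdk_guid//0x/}   # 0x6527994e-0x2c5a-... becomes 6527994e-2c5a-...
sgdisk -t 1:"$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1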
00:27:00.255 21:23:22 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:00.525 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:27:00.525 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:27:01.470 21:23:23 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:27:01.470 21:23:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.470 21:23:23 -- common/autotest_common.sh@10 -- # set +x 00:27:01.470 [] 00:27:01.470 21:23:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.470 21:23:23 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:27:01.470 21:23:23 -- bdev/blockdev.sh@79 -- # local json 00:27:01.470 21:23:23 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:27:01.470 21:23:23 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:01.470 21:23:23 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:27:01.470 21:23:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.470 21:23:23 -- common/autotest_common.sh@10 -- # set +x 00:27:01.470 21:23:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.470 21:23:24 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:27:01.470 21:23:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.470 21:23:24 -- common/autotest_common.sh@10 -- # set +x 00:27:01.470 21:23:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.470 21:23:24 -- bdev/blockdev.sh@738 -- # cat 00:27:01.470 21:23:24 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:27:01.470 21:23:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.470 21:23:24 -- common/autotest_common.sh@10 -- # set +x 00:27:01.470 21:23:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.470 21:23:24 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:27:01.470 21:23:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.470 21:23:24 -- common/autotest_common.sh@10 -- # set +x 00:27:01.470 21:23:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.470 21:23:24 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:01.470 21:23:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.470 21:23:24 -- common/autotest_common.sh@10 -- # set +x 00:27:01.470 21:23:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.470 21:23:24 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:27:01.470 21:23:24 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:27:01.470 21:23:24 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:27:01.470 21:23:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.470 21:23:24 -- common/autotest_common.sh@10 -- # set +x 00:27:01.470 21:23:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.730 21:23:24 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:27:01.730 21:23:24 -- bdev/blockdev.sh@747 -- # jq -r .name 00:27:01.731 21:23:24 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:27:01.731 21:23:24 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:27:01.731 21:23:24 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:27:01.731 21:23:24 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:27:01.731 21:23:24 -- bdev/blockdev.sh@752 -- # killprocess 150761 00:27:01.731 21:23:24 -- common/autotest_common.sh@926 -- # '[' -z 150761 ']' 00:27:01.731 21:23:24 -- common/autotest_common.sh@930 -- # kill -0 150761 00:27:01.731 21:23:24 -- common/autotest_common.sh@931 -- # uname 00:27:01.731 21:23:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:01.731 21:23:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 150761 00:27:01.731 killing process with pid 150761 00:27:01.731 21:23:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:01.731 21:23:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:01.731 21:23:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 150761' 00:27:01.731 21:23:24 -- common/autotest_common.sh@945 -- # kill 150761 00:27:01.731 21:23:24 -- common/autotest_common.sh@950 -- # wait 150761 00:27:01.990 21:23:24 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:01.990 21:23:24 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:27:01.990 21:23:24 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:27:01.990 21:23:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:01.990 21:23:24 -- common/autotest_common.sh@10 -- # set +x 00:27:01.990 ************************************ 00:27:01.990 START TEST bdev_hello_world 00:27:01.990 ************************************ 00:27:01.990 21:23:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b 
Nvme0n1p1 '' 00:27:02.249 [2024-06-07 21:23:24.702686] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:02.249 [2024-06-07 21:23:24.703086] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151298 ] 00:27:02.249 [2024-06-07 21:23:24.869962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.509 [2024-06-07 21:23:24.948838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.509 [2024-06-07 21:23:25.159101] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:27:02.509 [2024-06-07 21:23:25.159384] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:27:02.509 [2024-06-07 21:23:25.159474] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:27:02.509 [2024-06-07 21:23:25.161932] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:27:02.509 [2024-06-07 21:23:25.162499] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:27:02.509 [2024-06-07 21:23:25.162668] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:27:02.509 [2024-06-07 21:23:25.163011] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:27:02.509 00:27:02.509 [2024-06-07 21:23:25.163213] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:27:02.768 ************************************ 00:27:02.768 END TEST bdev_hello_world 00:27:02.768 ************************************ 00:27:02.768 00:27:02.768 real 0m0.751s 00:27:02.768 user 0m0.449s 00:27:02.768 sys 0m0.202s 00:27:02.768 21:23:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:02.768 21:23:25 -- common/autotest_common.sh@10 -- # set +x 00:27:03.028 21:23:25 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:27:03.028 21:23:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:03.028 21:23:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:03.028 21:23:25 -- common/autotest_common.sh@10 -- # set +x 00:27:03.028 ************************************ 00:27:03.028 START TEST bdev_bounds 00:27:03.028 ************************************ 00:27:03.028 Process bdevio pid: 151330 00:27:03.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.028 21:23:25 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:27:03.028 21:23:25 -- bdev/blockdev.sh@288 -- # bdevio_pid=151330 00:27:03.028 21:23:25 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:27:03.028 21:23:25 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 151330' 00:27:03.028 21:23:25 -- bdev/blockdev.sh@291 -- # waitforlisten 151330 00:27:03.028 21:23:25 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:03.028 21:23:25 -- common/autotest_common.sh@819 -- # '[' -z 151330 ']' 00:27:03.028 21:23:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.028 21:23:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:03.028 21:23:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
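The long printf block a little earlier is the raw bdev_get_bdevs JSON for the two GPT partitions, which the harness narrows with jq to the unclaimed bdevs and their names. A standalone query in the same spirit, runnable while a target exposing those bdevs is up (default RPC socket assumed):

#!/usr/bin/env bash
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.claimed == false)
             | "\(.name) \(.driver_specific.gpt.unique_partition_guid)"'
# expected, given the dump above:
#   Nvme0n1p1 6f89f330-603b-4116-ac73-2ca8eae53030
#   Nvme0n1p2 abf1734f-66e5-4c0f-aa29-4021d4d307df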
00:27:03.028 21:23:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:03.028 21:23:25 -- common/autotest_common.sh@10 -- # set +x 00:27:03.028 [2024-06-07 21:23:25.508194] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:03.028 [2024-06-07 21:23:25.508671] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151330 ] 00:27:03.028 [2024-06-07 21:23:25.679769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:03.287 [2024-06-07 21:23:25.737861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.287 [2024-06-07 21:23:25.737998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.287 [2024-06-07 21:23:25.737994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:03.855 21:23:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:03.855 21:23:26 -- common/autotest_common.sh@852 -- # return 0 00:27:03.855 21:23:26 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:27:03.855 I/O targets: 00:27:03.855 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:27:03.855 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:27:03.855 00:27:03.855 00:27:03.855 CUnit - A unit testing framework for C - Version 2.1-3 00:27:03.855 http://cunit.sourceforge.net/ 00:27:03.855 00:27:03.855 00:27:03.855 Suite: bdevio tests on: Nvme0n1p2 00:27:03.855 Test: blockdev write read block ...passed 00:27:03.855 Test: blockdev write zeroes read block ...passed 00:27:03.855 Test: blockdev write zeroes read no split ...passed 00:27:03.855 Test: blockdev write zeroes read split ...passed 00:27:03.855 Test: blockdev write zeroes read split partial ...passed 00:27:03.855 Test: blockdev reset ...[2024-06-07 21:23:26.513898] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:27:03.855 [2024-06-07 21:23:26.516037] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:03.855 passed 00:27:03.855 Test: blockdev write read 8 blocks ...passed 00:27:03.855 Test: blockdev write read size > 128k ...passed 00:27:03.855 Test: blockdev write read invalid size ...passed 00:27:03.855 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:03.855 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:03.855 Test: blockdev write read max offset ...passed 00:27:03.855 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:03.855 Test: blockdev writev readv 8 blocks ...passed 00:27:03.855 Test: blockdev writev readv 30 x 1block ...passed 00:27:03.855 Test: blockdev writev readv block ...passed 00:27:03.855 Test: blockdev writev readv size > 128k ...passed 00:27:03.855 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:03.855 Test: blockdev comparev and writev ...[2024-06-07 21:23:26.524328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x4320b000 len:0x1000 00:27:03.855 [2024-06-07 21:23:26.524567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:27:03.855 passed 00:27:03.855 Test: blockdev nvme passthru rw ...passed 00:27:03.855 Test: blockdev nvme passthru vendor specific ...passed 00:27:03.855 Test: blockdev nvme admin passthru ...passed 00:27:03.855 Test: blockdev copy ...passed 00:27:03.855 Suite: bdevio tests on: Nvme0n1p1 00:27:03.855 Test: blockdev write read block ...passed 00:27:03.855 Test: blockdev write zeroes read block ...passed 00:27:03.855 Test: blockdev write zeroes read no split ...passed 00:27:04.114 Test: blockdev write zeroes read split ...passed 00:27:04.114 Test: blockdev write zeroes read split partial ...passed 00:27:04.114 Test: blockdev reset ...[2024-06-07 21:23:26.538478] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:27:04.114 [2024-06-07 21:23:26.540259] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:04.114 passed 00:27:04.114 Test: blockdev write read 8 blocks ...passed 00:27:04.114 Test: blockdev write read size > 128k ...passed 00:27:04.114 Test: blockdev write read invalid size ...passed 00:27:04.114 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:04.114 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:04.114 Test: blockdev write read max offset ...passed 00:27:04.114 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:04.114 Test: blockdev writev readv 8 blocks ...passed 00:27:04.114 Test: blockdev writev readv 30 x 1block ...passed 00:27:04.114 Test: blockdev writev readv block ...passed 00:27:04.114 Test: blockdev writev readv size > 128k ...passed 00:27:04.114 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:04.114 Test: blockdev comparev and writev ...[2024-06-07 21:23:26.547973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x4320d000 len:0x1000 00:27:04.114 [2024-06-07 21:23:26.548188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:27:04.114 passed 00:27:04.114 Test: blockdev nvme passthru rw ...passed 00:27:04.114 Test: blockdev nvme passthru vendor specific ...passed 00:27:04.114 Test: blockdev nvme admin passthru ...passed 00:27:04.114 Test: blockdev copy ...passed 00:27:04.114 00:27:04.114 Run Summary: Type Total Ran Passed Failed Inactive 00:27:04.114 suites 2 2 n/a 0 0 00:27:04.114 tests 46 46 46 0 0 00:27:04.114 asserts 284 284 284 0 n/a 00:27:04.114 00:27:04.114 Elapsed time = 0.118 seconds 00:27:04.114 0 00:27:04.114 21:23:26 -- bdev/blockdev.sh@293 -- # killprocess 151330 00:27:04.114 21:23:26 -- common/autotest_common.sh@926 -- # '[' -z 151330 ']' 00:27:04.114 21:23:26 -- common/autotest_common.sh@930 -- # kill -0 151330 00:27:04.114 21:23:26 -- common/autotest_common.sh@931 -- # uname 00:27:04.114 21:23:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:04.114 21:23:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 151330 00:27:04.114 21:23:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:04.114 21:23:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:04.114 killing process with pid 151330 00:27:04.114 21:23:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 151330' 00:27:04.114 21:23:26 -- common/autotest_common.sh@945 -- # kill 151330 00:27:04.114 21:23:26 -- common/autotest_common.sh@950 -- # wait 151330 00:27:04.372 ************************************ 00:27:04.372 END TEST bdev_bounds 00:27:04.372 ************************************ 00:27:04.372 21:23:26 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:27:04.372 00:27:04.372 real 0m1.337s 00:27:04.372 user 0m3.375s 00:27:04.372 sys 0m0.256s 00:27:04.372 21:23:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:04.372 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:27:04.372 21:23:26 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:27:04.372 21:23:26 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:27:04.372 21:23:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:04.372 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:27:04.372 ************************************ 00:27:04.372 START TEST bdev_nbd 
00:27:04.372 ************************************ 00:27:04.372 21:23:26 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:27:04.373 21:23:26 -- bdev/blockdev.sh@298 -- # uname -s 00:27:04.373 21:23:26 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:27:04.373 21:23:26 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:04.373 21:23:26 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:04.373 21:23:26 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:27:04.373 21:23:26 -- bdev/blockdev.sh@302 -- # local bdev_all 00:27:04.373 21:23:26 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:27:04.373 21:23:26 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:27:04.373 21:23:26 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:27:04.373 21:23:26 -- bdev/blockdev.sh@309 -- # local nbd_all 00:27:04.373 21:23:26 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:27:04.373 21:23:26 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:27:04.373 21:23:26 -- bdev/blockdev.sh@312 -- # local nbd_list 00:27:04.373 21:23:26 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:27:04.373 21:23:26 -- bdev/blockdev.sh@313 -- # local bdev_list 00:27:04.373 21:23:26 -- bdev/blockdev.sh@316 -- # nbd_pid=151380 00:27:04.373 21:23:26 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:04.373 21:23:26 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:27:04.373 21:23:26 -- bdev/blockdev.sh@318 -- # waitforlisten 151380 /var/tmp/spdk-nbd.sock 00:27:04.373 21:23:26 -- common/autotest_common.sh@819 -- # '[' -z 151380 ']' 00:27:04.373 21:23:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:04.373 21:23:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:04.373 21:23:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:04.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:04.373 21:23:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:04.373 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:27:04.373 [2024-06-07 21:23:26.903463] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:04.373 [2024-06-07 21:23:26.903919] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.630 [2024-06-07 21:23:27.057773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.630 [2024-06-07 21:23:27.116298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.196 21:23:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:05.196 21:23:27 -- common/autotest_common.sh@852 -- # return 0 00:27:05.196 21:23:27 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:27:05.196 21:23:27 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:05.196 21:23:27 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:27:05.196 21:23:27 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:27:05.196 21:23:27 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:27:05.196 21:23:27 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:05.196 21:23:27 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:27:05.196 21:23:27 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:27:05.196 21:23:27 -- bdev/nbd_common.sh@24 -- # local i 00:27:05.196 21:23:27 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:27:05.196 21:23:27 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:27:05.196 21:23:27 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:27:05.196 21:23:27 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:27:05.455 21:23:28 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:27:05.455 21:23:28 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:27:05.455 21:23:28 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:27:05.455 21:23:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:27:05.455 21:23:28 -- common/autotest_common.sh@857 -- # local i 00:27:05.455 21:23:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:27:05.455 21:23:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:27:05.455 21:23:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:27:05.455 21:23:28 -- common/autotest_common.sh@861 -- # break 00:27:05.455 21:23:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:27:05.455 21:23:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:27:05.455 21:23:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:05.455 1+0 records in 00:27:05.455 1+0 records out 00:27:05.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000887293 s, 4.6 MB/s 00:27:05.455 21:23:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:05.455 21:23:28 -- common/autotest_common.sh@874 -- # size=4096 00:27:05.455 21:23:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:05.455 21:23:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:27:05.455 21:23:28 -- common/autotest_common.sh@877 -- # return 0 00:27:05.455 21:23:28 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:05.455 21:23:28 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:27:05.455 21:23:28 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme0n1p2 00:27:05.714 21:23:28 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:27:05.714 21:23:28 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:27:05.714 21:23:28 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:27:05.714 21:23:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:27:05.714 21:23:28 -- common/autotest_common.sh@857 -- # local i 00:27:05.714 21:23:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:27:05.714 21:23:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:27:05.714 21:23:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:27:05.714 21:23:28 -- common/autotest_common.sh@861 -- # break 00:27:05.714 21:23:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:27:05.714 21:23:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:27:05.714 21:23:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:05.714 1+0 records in 00:27:05.714 1+0 records out 00:27:05.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000831409 s, 4.9 MB/s 00:27:05.714 21:23:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:05.714 21:23:28 -- common/autotest_common.sh@874 -- # size=4096 00:27:05.714 21:23:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:05.714 21:23:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:27:05.714 21:23:28 -- common/autotest_common.sh@877 -- # return 0 00:27:05.714 21:23:28 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:05.714 21:23:28 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:27:05.714 21:23:28 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:05.973 21:23:28 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:27:05.973 { 00:27:05.973 "nbd_device": "/dev/nbd0", 00:27:05.973 "bdev_name": "Nvme0n1p1" 00:27:05.973 }, 00:27:05.973 { 00:27:05.973 "nbd_device": "/dev/nbd1", 00:27:05.973 "bdev_name": "Nvme0n1p2" 00:27:05.973 } 00:27:05.973 ]' 00:27:05.973 21:23:28 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:27:05.973 21:23:28 -- bdev/nbd_common.sh@119 -- # echo '[ 00:27:05.973 { 00:27:05.973 "nbd_device": "/dev/nbd0", 00:27:05.973 "bdev_name": "Nvme0n1p1" 00:27:05.973 }, 00:27:05.973 { 00:27:05.973 "nbd_device": "/dev/nbd1", 00:27:05.973 "bdev_name": "Nvme0n1p2" 00:27:05.973 } 00:27:05.973 ]' 00:27:05.973 21:23:28 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:27:05.973 21:23:28 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:05.973 21:23:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:05.973 21:23:28 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:27:05.973 21:23:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:05.973 21:23:28 -- bdev/nbd_common.sh@51 -- # local i 00:27:05.973 21:23:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:05.973 21:23:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:06.232 21:23:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:06.232 21:23:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:06.232 21:23:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:06.232 21:23:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:06.232 21:23:28 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:06.232 21:23:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:06.232 21:23:28 -- bdev/nbd_common.sh@41 -- # break 00:27:06.232 21:23:28 -- bdev/nbd_common.sh@45 -- # return 0 00:27:06.232 21:23:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:06.232 21:23:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:06.491 21:23:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:06.491 21:23:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:06.491 21:23:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:06.491 21:23:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:06.491 21:23:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:06.491 21:23:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:06.491 21:23:29 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:27:06.749 21:23:29 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:27:06.749 21:23:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:06.749 21:23:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:06.749 21:23:29 -- bdev/nbd_common.sh@41 -- # break 00:27:06.749 21:23:29 -- bdev/nbd_common.sh@45 -- # return 0 00:27:06.749 21:23:29 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:06.749 21:23:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:06.749 21:23:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@65 -- # true 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@65 -- # count=0 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@122 -- # count=0 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@127 -- # return 0 00:27:07.008 21:23:29 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@12 -- # local i 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@14 -- 
# (( i < 2 )) 00:27:07.008 21:23:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:27:07.269 /dev/nbd0 00:27:07.269 21:23:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:07.269 21:23:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:07.269 21:23:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:27:07.269 21:23:29 -- common/autotest_common.sh@857 -- # local i 00:27:07.269 21:23:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:27:07.269 21:23:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:27:07.269 21:23:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:27:07.269 21:23:29 -- common/autotest_common.sh@861 -- # break 00:27:07.269 21:23:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:27:07.269 21:23:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:27:07.269 21:23:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:07.269 1+0 records in 00:27:07.269 1+0 records out 00:27:07.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00096278 s, 4.3 MB/s 00:27:07.269 21:23:29 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:07.269 21:23:29 -- common/autotest_common.sh@874 -- # size=4096 00:27:07.269 21:23:29 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:07.269 21:23:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:27:07.269 21:23:29 -- common/autotest_common.sh@877 -- # return 0 00:27:07.269 21:23:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:07.269 21:23:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:07.269 21:23:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:27:07.533 /dev/nbd1 00:27:07.533 21:23:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:07.533 21:23:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:07.533 21:23:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:27:07.533 21:23:30 -- common/autotest_common.sh@857 -- # local i 00:27:07.533 21:23:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:27:07.533 21:23:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:27:07.533 21:23:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:27:07.533 21:23:30 -- common/autotest_common.sh@861 -- # break 00:27:07.533 21:23:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:27:07.533 21:23:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:27:07.533 21:23:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:07.533 1+0 records in 00:27:07.533 1+0 records out 00:27:07.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000997749 s, 4.1 MB/s 00:27:07.533 21:23:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:07.533 21:23:30 -- common/autotest_common.sh@874 -- # size=4096 00:27:07.533 21:23:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:07.533 21:23:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:27:07.533 21:23:30 -- common/autotest_common.sh@877 -- # return 0 00:27:07.533 21:23:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:07.533 21:23:30 -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:07.533 21:23:30 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:07.533 21:23:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:07.533 21:23:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:07.792 21:23:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:07.792 { 00:27:07.792 "nbd_device": "/dev/nbd0", 00:27:07.792 "bdev_name": "Nvme0n1p1" 00:27:07.792 }, 00:27:07.792 { 00:27:07.792 "nbd_device": "/dev/nbd1", 00:27:07.792 "bdev_name": "Nvme0n1p2" 00:27:07.792 } 00:27:07.792 ]' 00:27:07.792 21:23:30 -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:07.792 { 00:27:07.792 "nbd_device": "/dev/nbd0", 00:27:07.792 "bdev_name": "Nvme0n1p1" 00:27:07.792 }, 00:27:07.792 { 00:27:07.792 "nbd_device": "/dev/nbd1", 00:27:07.792 "bdev_name": "Nvme0n1p2" 00:27:07.792 } 00:27:07.792 ]' 00:27:07.792 21:23:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:07.792 21:23:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:27:07.792 /dev/nbd1' 00:27:07.792 21:23:30 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:27:07.792 /dev/nbd1' 00:27:07.792 21:23:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:07.792 21:23:30 -- bdev/nbd_common.sh@65 -- # count=2 00:27:07.792 21:23:30 -- bdev/nbd_common.sh@66 -- # echo 2 00:27:07.792 21:23:30 -- bdev/nbd_common.sh@95 -- # count=2 00:27:07.792 21:23:30 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:27:07.792 21:23:30 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:27:07.792 21:23:30 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:27:07.792 21:23:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:07.792 21:23:30 -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:07.792 21:23:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:07.792 21:23:30 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:07.792 21:23:30 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:27:07.792 256+0 records in 00:27:07.792 256+0 records out 00:27:07.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103536 s, 101 MB/s 00:27:07.792 21:23:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:07.792 21:23:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:08.050 256+0 records in 00:27:08.051 256+0 records out 00:27:08.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138833 s, 7.6 MB/s 00:27:08.051 21:23:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:08.051 21:23:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:27:08.051 256+0 records in 00:27:08.051 256+0 records out 00:27:08.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0924972 s, 11.3 MB/s 00:27:08.051 21:23:30 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:27:08.051 21:23:30 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:27:08.051 21:23:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:08.051 21:23:30 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:08.051 21:23:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:08.051 21:23:30 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 
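(Condensed, this is the export-and-verify pattern the nbd test is exercising here, with the compare step following below; it assumes the nbd kernel module is loaded and an SPDK app is serving /var/tmp/spdk-nbd.sock, as in this run:)

    # expose each GPT partition bdev as a kernel block device over NBD
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1
    # push a 1 MiB random pattern through each device with O_DIRECT...
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    # ...then read it back and compare byte-for-byte
    cmp -b -n 1M nbdrandtest /dev/nbd0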
00:27:08.051 21:23:30 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:08.051 21:23:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:08.051 21:23:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:27:08.051 21:23:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:08.051 21:23:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:27:08.051 21:23:30 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:08.051 21:23:30 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:08.051 21:23:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:08.051 21:23:30 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:27:08.051 21:23:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:08.051 21:23:30 -- bdev/nbd_common.sh@51 -- # local i 00:27:08.051 21:23:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:08.051 21:23:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:08.309 21:23:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:08.309 21:23:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:08.309 21:23:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:08.309 21:23:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:08.309 21:23:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:08.309 21:23:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:08.309 21:23:30 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:27:08.568 21:23:31 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:27:08.568 21:23:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:08.568 21:23:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:08.568 21:23:31 -- bdev/nbd_common.sh@41 -- # break 00:27:08.568 21:23:31 -- bdev/nbd_common.sh@45 -- # return 0 00:27:08.568 21:23:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:08.568 21:23:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:08.827 21:23:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:08.827 21:23:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:08.827 21:23:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:08.827 21:23:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:08.827 21:23:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:08.827 21:23:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:08.827 21:23:31 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:27:08.827 21:23:31 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:27:08.827 21:23:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:08.827 21:23:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:08.827 21:23:31 -- bdev/nbd_common.sh@41 -- # break 00:27:08.827 21:23:31 -- bdev/nbd_common.sh@45 -- # return 0 00:27:08.827 21:23:31 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:08.827 21:23:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:08.827 21:23:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:09.085 21:23:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:09.085 21:23:31 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:09.085 
21:23:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:09.344 21:23:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:09.344 21:23:31 -- bdev/nbd_common.sh@65 -- # echo '' 00:27:09.344 21:23:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:09.344 21:23:31 -- bdev/nbd_common.sh@65 -- # true 00:27:09.344 21:23:31 -- bdev/nbd_common.sh@65 -- # count=0 00:27:09.344 21:23:31 -- bdev/nbd_common.sh@66 -- # echo 0 00:27:09.344 21:23:31 -- bdev/nbd_common.sh@104 -- # count=0 00:27:09.344 21:23:31 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:09.344 21:23:31 -- bdev/nbd_common.sh@109 -- # return 0 00:27:09.344 21:23:31 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:09.344 21:23:31 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:09.344 21:23:31 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:27:09.344 21:23:31 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:27:09.344 21:23:31 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:27:09.344 21:23:31 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:27:09.344 malloc_lvol_verify 00:27:09.602 21:23:32 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:27:09.602 e802a62b-8ba0-4d83-aa0d-a4a85faa375e 00:27:09.602 21:23:32 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:27:09.860 2d65642f-7b7c-44cc-baa0-c9e0f00ee5e5 00:27:09.860 21:23:32 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:27:10.118 /dev/nbd0 00:27:10.118 21:23:32 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:27:10.118 mke2fs 1.45.5 (07-Jan-2020) 00:27:10.118 00:27:10.118 Filesystem too small for a journal 00:27:10.118 Creating filesystem with 1024 4k blocks and 1024 inodes 00:27:10.118 00:27:10.118 Allocating group tables: 0/1 done 00:27:10.118 Writing inode tables: 0/1 done 00:27:10.118 Writing superblocks and filesystem accounting information: 0/1 done 00:27:10.118 00:27:10.118 21:23:32 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:27:10.118 21:23:32 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:10.118 21:23:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:10.118 21:23:32 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:27:10.118 21:23:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:10.118 21:23:32 -- bdev/nbd_common.sh@51 -- # local i 00:27:10.118 21:23:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:10.118 21:23:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:10.377 21:23:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:10.377 21:23:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:10.377 21:23:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:10.377 21:23:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:10.377 21:23:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:10.377 21:23:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:10.377 21:23:32 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:27:10.377 21:23:32 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:27:10.377 21:23:32 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:10.377 21:23:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:10.377 21:23:32 -- bdev/nbd_common.sh@41 -- # break 00:27:10.377 21:23:32 -- bdev/nbd_common.sh@45 -- # return 0 00:27:10.377 21:23:32 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:27:10.377 21:23:32 -- bdev/nbd_common.sh@147 -- # return 0 00:27:10.377 21:23:32 -- bdev/blockdev.sh@324 -- # killprocess 151380 00:27:10.377 21:23:32 -- common/autotest_common.sh@926 -- # '[' -z 151380 ']' 00:27:10.377 21:23:32 -- common/autotest_common.sh@930 -- # kill -0 151380 00:27:10.377 21:23:32 -- common/autotest_common.sh@931 -- # uname 00:27:10.377 21:23:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:10.377 21:23:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 151380 00:27:10.377 21:23:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:10.377 21:23:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:10.377 21:23:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 151380' 00:27:10.377 killing process with pid 151380 00:27:10.377 21:23:33 -- common/autotest_common.sh@945 -- # kill 151380 00:27:10.377 21:23:33 -- common/autotest_common.sh@950 -- # wait 151380 00:27:10.636 21:23:33 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:27:10.636 00:27:10.636 real 0m6.404s 00:27:10.636 user 0m9.401s 00:27:10.636 sys 0m1.612s 00:27:10.636 21:23:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:10.636 21:23:33 -- common/autotest_common.sh@10 -- # set +x 00:27:10.636 ************************************ 00:27:10.636 END TEST bdev_nbd 00:27:10.636 ************************************ 00:27:10.636 21:23:33 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:27:10.636 21:23:33 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:27:10.637 21:23:33 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:27:10.637 21:23:33 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:27:10.637 skipping fio tests on NVMe due to multi-ns failures. 00:27:10.637 21:23:33 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:10.637 21:23:33 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:10.637 21:23:33 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:27:10.637 21:23:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:10.637 21:23:33 -- common/autotest_common.sh@10 -- # set +x 00:27:10.896 ************************************ 00:27:10.896 START TEST bdev_verify 00:27:10.896 ************************************ 00:27:10.896 21:23:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:10.896 [2024-06-07 21:23:33.374300] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
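(A minimal sketch of the bdevperf invocation behind the verify pass starting here; the flags mirror the command traced in this log:)

    # queue depth 128, 4 KiB I/O, 'verify' workload for 5 s, reactors on cores 0-1 (-m 0x3)
    build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3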
00:27:10.896 [2024-06-07 21:23:33.374696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151659 ] 00:27:10.896 [2024-06-07 21:23:33.549446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:11.154 [2024-06-07 21:23:33.629419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.154 [2024-06-07 21:23:33.629433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.413 Running I/O for 5 seconds... 00:27:16.703 00:27:16.703 Latency(us) 00:27:16.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:16.703 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:16.703 Verification LBA range: start 0x0 length 0x4ff80 00:27:16.703 Nvme0n1p1 : 5.01 7965.03 31.11 0.00 0.00 16031.30 1750.11 24188.74 00:27:16.703 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:16.703 Verification LBA range: start 0x4ff80 length 0x4ff80 00:27:16.703 Nvme0n1p1 : 5.01 7929.03 30.97 0.00 0.00 16098.34 2636.33 25380.31 00:27:16.703 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:16.703 Verification LBA range: start 0x0 length 0x4ff7f 00:27:16.703 Nvme0n1p2 : 5.01 7962.10 31.10 0.00 0.00 16023.74 2278.87 22997.18 00:27:16.703 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:16.703 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:27:16.703 Nvme0n1p2 : 5.02 7940.50 31.02 0.00 0.00 16062.50 318.37 20256.58 00:27:16.703 =================================================================================================================== 00:27:16.703 Total : 31796.66 124.21 0.00 0.00 16053.91 318.37 25380.31 00:27:20.960 ************************************ 00:27:20.960 END TEST bdev_verify 00:27:20.960 ************************************ 00:27:20.960 00:27:20.960 real 0m9.586s 00:27:20.960 user 0m18.365s 00:27:20.960 sys 0m0.287s 00:27:20.960 21:23:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:20.960 21:23:42 -- common/autotest_common.sh@10 -- # set +x 00:27:20.960 21:23:42 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:20.960 21:23:42 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:27:20.960 21:23:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:20.960 21:23:42 -- common/autotest_common.sh@10 -- # set +x 00:27:20.960 ************************************ 00:27:20.960 START TEST bdev_verify_big_io 00:27:20.960 ************************************ 00:27:20.960 21:23:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:20.960 [2024-06-07 21:23:43.013917] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:20.960 [2024-06-07 21:23:43.014403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151814 ] 00:27:20.960 [2024-06-07 21:23:43.183100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:20.960 [2024-06-07 21:23:43.266421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.960 [2024-06-07 21:23:43.266426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.960 Running I/O for 5 seconds... 00:27:26.230 00:27:26.230 Latency(us) 00:27:26.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.230 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:26.230 Verification LBA range: start 0x0 length 0x4ff8 00:27:26.230 Nvme0n1p1 : 5.13 815.08 50.94 0.00 0.00 154391.52 22401.40 242125.73 00:27:26.231 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:26.231 Verification LBA range: start 0x4ff8 length 0x4ff8 00:27:26.231 Nvme0n1p1 : 5.13 815.32 50.96 0.00 0.00 154346.98 22043.93 239265.98 00:27:26.231 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:26.231 Verification LBA range: start 0x0 length 0x4ff7 00:27:26.231 Nvme0n1p2 : 5.13 830.77 51.92 0.00 0.00 149975.38 878.78 177304.67 00:27:26.231 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:26.231 Verification LBA range: start 0x4ff7 length 0x4ff7 00:27:26.231 Nvme0n1p2 : 5.13 830.90 51.93 0.00 0.00 150121.53 1131.99 179211.17 00:27:26.231 =================================================================================================================== 00:27:26.231 Total : 3292.08 205.75 0.00 0.00 152187.11 878.78 242125.73 00:27:26.489 ************************************ 00:27:26.489 END TEST bdev_verify_big_io 00:27:26.489 ************************************ 00:27:26.489 00:27:26.489 real 0m6.192s 00:27:26.489 user 0m11.605s 00:27:26.489 sys 0m0.225s 00:27:26.489 21:23:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:26.489 21:23:49 -- common/autotest_common.sh@10 -- # set +x 00:27:26.748 21:23:49 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:26.748 21:23:49 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:27:26.748 21:23:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:26.748 21:23:49 -- common/autotest_common.sh@10 -- # set +x 00:27:26.749 ************************************ 00:27:26.749 START TEST bdev_write_zeroes 00:27:26.749 ************************************ 00:27:26.749 21:23:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:26.749 [2024-06-07 21:23:49.252978] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
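(Likewise for the two passes around this point: the big-I/O run above differs from the verify sketch only in its 64 KiB block size, and the write_zeroes run starting here switches to a zero-fill workload; both commands are as traced in this log:)

    # big-I/O verify: same shape, 64 KiB blocks
    build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3
    # zero-fill: 4 KiB write_zeroes for 1 s on a single core
    build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1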
00:27:26.749 [2024-06-07 21:23:49.254154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151908 ] 00:27:26.749 [2024-06-07 21:23:49.416728] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.007 [2024-06-07 21:23:49.479341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.265 Running I/O for 1 seconds... 00:27:28.199 00:27:28.199 Latency(us) 00:27:28.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.199 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:28.199 Nvme0n1p1 : 1.01 25647.65 100.19 0.00 0.00 4980.26 2338.44 17396.83 00:27:28.199 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:28.199 Nvme0n1p2 : 1.01 25680.58 100.31 0.00 0.00 4965.94 2621.44 12928.47 00:27:28.199 =================================================================================================================== 00:27:28.199 Total : 51328.23 200.50 0.00 0.00 4973.09 2338.44 17396.83 00:27:28.470 ************************************ 00:27:28.470 END TEST bdev_write_zeroes 00:27:28.470 ************************************ 00:27:28.470 00:27:28.470 real 0m1.754s 00:27:28.470 user 0m1.472s 00:27:28.470 sys 0m0.180s 00:27:28.470 21:23:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:28.470 21:23:50 -- common/autotest_common.sh@10 -- # set +x 00:27:28.470 21:23:50 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:28.470 21:23:50 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:27:28.470 21:23:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:28.470 21:23:50 -- common/autotest_common.sh@10 -- # set +x 00:27:28.470 ************************************ 00:27:28.470 START TEST bdev_json_nonenclosed 00:27:28.470 ************************************ 00:27:28.470 21:23:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:28.470 [2024-06-07 21:23:51.055628] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:28.470 [2024-06-07 21:23:51.056082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151966 ] 00:27:28.740 [2024-06-07 21:23:51.221969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.740 [2024-06-07 21:23:51.285803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.740 [2024-06-07 21:23:51.286323] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:27:28.740 [2024-06-07 21:23:51.286489] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:28.740 ************************************ 00:27:28.740 END TEST bdev_json_nonenclosed 00:27:28.740 ************************************ 00:27:28.740 00:27:28.740 real 0m0.378s 00:27:28.740 user 0m0.142s 00:27:28.740 sys 0m0.136s 00:27:28.740 21:23:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:28.740 21:23:51 -- common/autotest_common.sh@10 -- # set +x 00:27:29.008 21:23:51 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:29.008 21:23:51 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:27:29.008 21:23:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:29.008 21:23:51 -- common/autotest_common.sh@10 -- # set +x 00:27:29.008 ************************************ 00:27:29.008 START TEST bdev_json_nonarray 00:27:29.008 ************************************ 00:27:29.008 21:23:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:29.008 [2024-06-07 21:23:51.483764] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:29.008 [2024-06-07 21:23:51.484377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151993 ] 00:27:29.008 [2024-06-07 21:23:51.651213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.276 [2024-06-07 21:23:51.735438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.276 [2024-06-07 21:23:51.735815] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:27:29.276 [2024-06-07 21:23:51.735994] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:29.276 ************************************ 00:27:29.276 END TEST bdev_json_nonarray 00:27:29.276 ************************************ 00:27:29.276 00:27:29.276 real 0m0.413s 00:27:29.276 user 0m0.195s 00:27:29.276 sys 0m0.116s 00:27:29.276 21:23:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:29.276 21:23:51 -- common/autotest_common.sh@10 -- # set +x 00:27:29.276 21:23:51 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:27:29.276 21:23:51 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:27:29.276 21:23:51 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:27:29.276 21:23:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:29.276 21:23:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:29.276 21:23:51 -- common/autotest_common.sh@10 -- # set +x 00:27:29.276 ************************************ 00:27:29.276 START TEST bdev_gpt_uuid 00:27:29.276 ************************************ 00:27:29.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
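(The two negative tests above feed bdevperf deliberately malformed configs; their exact contents are not shown in this log, so the bodies below are hypothetical reconstructions inferred from the two error messages:)

    # nonenclosed.json: subsystem data without the enclosing top-level object braces
    "subsystems": []
    # nonarray.json: 'subsystems' present but not an array
    { "subsystems": {} }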
00:27:29.276 21:23:51 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:27:29.276 21:23:51 -- bdev/blockdev.sh@612 -- # local bdev 00:27:29.276 21:23:51 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:27:29.276 21:23:51 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=152028 00:27:29.276 21:23:51 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:29.276 21:23:51 -- bdev/blockdev.sh@47 -- # waitforlisten 152028 00:27:29.276 21:23:51 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:27:29.276 21:23:51 -- common/autotest_common.sh@819 -- # '[' -z 152028 ']' 00:27:29.276 21:23:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.276 21:23:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:29.276 21:23:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.276 21:23:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:29.276 21:23:51 -- common/autotest_common.sh@10 -- # set +x 00:27:29.542 [2024-06-07 21:23:51.963820] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:29.542 [2024-06-07 21:23:51.964815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152028 ] 00:27:29.542 [2024-06-07 21:23:52.130458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.542 [2024-06-07 21:23:52.190304] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:29.542 [2024-06-07 21:23:52.190822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.480 21:23:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:30.480 21:23:52 -- common/autotest_common.sh@852 -- # return 0 00:27:30.480 21:23:52 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:30.480 21:23:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.480 21:23:52 -- common/autotest_common.sh@10 -- # set +x 00:27:30.480 Some configs were skipped because the RPC state that can call them passed over. 
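(The UUID round-trip check that follows can be reproduced by hand with the same RPC method and jq filters the script uses; the bdev.json scratch file here is only illustrative:)

    # look up the GPT partition bdev by its unique partition GUID...
    scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 > bdev.json
    # ...then confirm the alias and GUID fields round-trip
    jq -r '.[0].aliases[0]' bdev.json
    jq -r '.[0].driver_specific.gpt.unique_partition_guid' bdev.json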
00:27:30.480 21:23:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.480 21:23:53 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:27:30.480 21:23:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.480 21:23:53 -- common/autotest_common.sh@10 -- # set +x 00:27:30.480 21:23:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.480 21:23:53 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:27:30.480 21:23:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.480 21:23:53 -- common/autotest_common.sh@10 -- # set +x 00:27:30.480 21:23:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.480 21:23:53 -- bdev/blockdev.sh@619 -- # bdev='[ 00:27:30.480 { 00:27:30.480 "name": "Nvme0n1p1", 00:27:30.480 "aliases": [ 00:27:30.480 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:27:30.480 ], 00:27:30.480 "product_name": "GPT Disk", 00:27:30.480 "block_size": 4096, 00:27:30.480 "num_blocks": 655104, 00:27:30.480 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:27:30.480 "assigned_rate_limits": { 00:27:30.480 "rw_ios_per_sec": 0, 00:27:30.480 "rw_mbytes_per_sec": 0, 00:27:30.480 "r_mbytes_per_sec": 0, 00:27:30.480 "w_mbytes_per_sec": 0 00:27:30.480 }, 00:27:30.480 "claimed": false, 00:27:30.480 "zoned": false, 00:27:30.480 "supported_io_types": { 00:27:30.480 "read": true, 00:27:30.480 "write": true, 00:27:30.480 "unmap": true, 00:27:30.480 "write_zeroes": true, 00:27:30.480 "flush": true, 00:27:30.480 "reset": true, 00:27:30.480 "compare": true, 00:27:30.480 "compare_and_write": false, 00:27:30.480 "abort": true, 00:27:30.480 "nvme_admin": false, 00:27:30.480 "nvme_io": false 00:27:30.480 }, 00:27:30.480 "driver_specific": { 00:27:30.480 "gpt": { 00:27:30.480 "base_bdev": "Nvme0n1", 00:27:30.480 "offset_blocks": 256, 00:27:30.480 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:27:30.480 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:27:30.480 "partition_name": "SPDK_TEST_first" 00:27:30.480 } 00:27:30.480 } 00:27:30.480 } 00:27:30.480 ]' 00:27:30.480 21:23:53 -- bdev/blockdev.sh@620 -- # jq -r length 00:27:30.480 21:23:53 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:27:30.480 21:23:53 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:27:30.480 21:23:53 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:27:30.480 21:23:53 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:27:30.739 21:23:53 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:27:30.739 21:23:53 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:27:30.739 21:23:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.739 21:23:53 -- common/autotest_common.sh@10 -- # set +x 00:27:30.739 21:23:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.739 21:23:53 -- bdev/blockdev.sh@624 -- # bdev='[ 00:27:30.739 { 00:27:30.739 "name": "Nvme0n1p2", 00:27:30.739 "aliases": [ 00:27:30.739 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:27:30.739 ], 00:27:30.739 "product_name": "GPT Disk", 00:27:30.739 "block_size": 4096, 00:27:30.739 "num_blocks": 655103, 00:27:30.739 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:27:30.739 "assigned_rate_limits": { 00:27:30.739 "rw_ios_per_sec": 0, 00:27:30.739 
"rw_mbytes_per_sec": 0, 00:27:30.739 "r_mbytes_per_sec": 0, 00:27:30.739 "w_mbytes_per_sec": 0 00:27:30.739 }, 00:27:30.739 "claimed": false, 00:27:30.739 "zoned": false, 00:27:30.739 "supported_io_types": { 00:27:30.739 "read": true, 00:27:30.739 "write": true, 00:27:30.739 "unmap": true, 00:27:30.739 "write_zeroes": true, 00:27:30.739 "flush": true, 00:27:30.739 "reset": true, 00:27:30.739 "compare": true, 00:27:30.739 "compare_and_write": false, 00:27:30.739 "abort": true, 00:27:30.739 "nvme_admin": false, 00:27:30.739 "nvme_io": false 00:27:30.739 }, 00:27:30.739 "driver_specific": { 00:27:30.739 "gpt": { 00:27:30.739 "base_bdev": "Nvme0n1", 00:27:30.739 "offset_blocks": 655360, 00:27:30.739 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:27:30.739 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:27:30.739 "partition_name": "SPDK_TEST_second" 00:27:30.739 } 00:27:30.739 } 00:27:30.739 } 00:27:30.739 ]' 00:27:30.739 21:23:53 -- bdev/blockdev.sh@625 -- # jq -r length 00:27:30.739 21:23:53 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:27:30.739 21:23:53 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:27:30.739 21:23:53 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:27:30.739 21:23:53 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:27:30.739 21:23:53 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:27:30.739 21:23:53 -- bdev/blockdev.sh@629 -- # killprocess 152028 00:27:30.739 21:23:53 -- common/autotest_common.sh@926 -- # '[' -z 152028 ']' 00:27:30.739 21:23:53 -- common/autotest_common.sh@930 -- # kill -0 152028 00:27:30.739 21:23:53 -- common/autotest_common.sh@931 -- # uname 00:27:30.739 21:23:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:30.739 21:23:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 152028 00:27:30.739 21:23:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:30.739 killing process with pid 152028 00:27:30.739 21:23:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:30.739 21:23:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 152028' 00:27:30.739 21:23:53 -- common/autotest_common.sh@945 -- # kill 152028 00:27:30.739 21:23:53 -- common/autotest_common.sh@950 -- # wait 152028 00:27:31.305 ************************************ 00:27:31.305 END TEST bdev_gpt_uuid 00:27:31.305 ************************************ 00:27:31.305 00:27:31.305 real 0m1.924s 00:27:31.305 user 0m2.259s 00:27:31.305 sys 0m0.377s 00:27:31.305 21:23:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:31.305 21:23:53 -- common/autotest_common.sh@10 -- # set +x 00:27:31.305 21:23:53 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:27:31.305 21:23:53 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:27:31.305 21:23:53 -- bdev/blockdev.sh@809 -- # cleanup 00:27:31.305 21:23:53 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:27:31.305 21:23:53 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:31.305 21:23:53 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:27:31.305 21:23:53 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:27:31.305 21:23:53 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:27:31.305 21:23:53 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:31.563 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:27:31.563 Waiting for block devices as requested 00:27:31.563 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:31.821 21:23:54 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:27:31.821 21:23:54 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:27:31.821 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:27:31.821 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:27:31.821 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:27:31.821 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:27:31.821 21:23:54 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:27:31.821 00:27:31.821 real 0m36.003s 00:27:31.821 user 0m54.625s 00:27:31.821 sys 0m5.472s 00:27:31.821 21:23:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:31.821 21:23:54 -- common/autotest_common.sh@10 -- # set +x 00:27:31.821 ************************************ 00:27:31.821 END TEST blockdev_nvme_gpt 00:27:31.821 ************************************ 00:27:31.821 21:23:54 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:27:31.822 21:23:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:31.822 21:23:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:31.822 21:23:54 -- common/autotest_common.sh@10 -- # set +x 00:27:31.822 ************************************ 00:27:31.822 START TEST nvme 00:27:31.822 ************************************ 00:27:31.822 21:23:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:27:31.822 * Looking for test storage... 00:27:31.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:27:31.822 21:23:54 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:32.388 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:27:32.388 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:27:33.326 21:23:55 -- nvme/nvme.sh@79 -- # uname 00:27:33.326 21:23:55 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:27:33.326 21:23:55 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:27:33.326 21:23:55 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:27:33.326 21:23:55 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:27:33.326 21:23:55 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:27:33.326 21:23:55 -- common/autotest_common.sh@1045 -- # echo 0 00:27:33.326 21:23:55 -- common/autotest_common.sh@1047 -- # stubpid=152450 00:27:33.326 Waiting for stub to ready for secondary processes... 00:27:33.326 21:23:55 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:27:33.326 21:23:55 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:27:33.326 21:23:55 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:27:33.326 21:23:55 -- common/autotest_common.sh@1051 -- # [[ -e /proc/152450 ]] 00:27:33.326 21:23:55 -- common/autotest_common.sh@1052 -- # sleep 1s 00:27:33.589 [2024-06-07 21:23:56.017933] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:33.589 [2024-06-07 21:23:56.018186] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.529 21:23:56 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:27:34.529 21:23:56 -- common/autotest_common.sh@1051 -- # [[ -e /proc/152450 ]] 00:27:34.529 21:23:56 -- common/autotest_common.sh@1052 -- # sleep 1s 00:27:34.787 [2024-06-07 21:23:57.253956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:34.787 [2024-06-07 21:23:57.322117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.787 [2024-06-07 21:23:57.322849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.787 [2024-06-07 21:23:57.322927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.787 [2024-06-07 21:23:57.332411] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:27:34.787 [2024-06-07 21:23:57.340269] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:27:34.787 [2024-06-07 21:23:57.340803] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:27:35.354 21:23:57 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:27:35.354 21:23:57 -- common/autotest_common.sh@1054 -- # echo done. 00:27:35.354 done. 00:27:35.354 21:23:57 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:27:35.354 21:23:57 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:27:35.354 21:23:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:35.354 21:23:57 -- common/autotest_common.sh@10 -- # set +x 00:27:35.354 ************************************ 00:27:35.354 START TEST nvme_reset 00:27:35.354 ************************************ 00:27:35.354 21:23:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:27:35.612 Initializing NVMe Controllers 00:27:35.612 Skipping QEMU NVMe SSD at 0000:00:06.0 00:27:35.612 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:27:35.612 00:27:35.612 real 0m0.286s 00:27:35.612 user 0m0.092s 00:27:35.612 sys 0m0.127s 00:27:35.612 21:23:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:35.612 21:23:58 -- common/autotest_common.sh@10 -- # set +x 00:27:35.612 ************************************ 00:27:35.612 END TEST nvme_reset 00:27:35.612 ************************************ 00:27:35.870 21:23:58 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:27:35.870 21:23:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:35.870 21:23:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:35.870 21:23:58 -- common/autotest_common.sh@10 -- # set +x 00:27:35.870 ************************************ 00:27:35.870 START TEST nvme_identify 00:27:35.870 ************************************ 00:27:35.870 21:23:58 -- common/autotest_common.sh@1104 -- # nvme_identify 00:27:35.870 21:23:58 -- nvme/nvme.sh@12 -- # bdfs=() 00:27:35.870 21:23:58 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:27:35.870 21:23:58 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:27:35.870 21:23:58 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:27:35.870 21:23:58 -- common/autotest_common.sh@1498 -- # bdfs=() 
00:27:35.870 21:23:58 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:35.870 21:23:58 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:35.870 21:23:58 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:35.870 21:23:58 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:35.870 21:23:58 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:35.870 21:23:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:27:35.870 21:23:58 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:27:36.129 [2024-06-07 21:23:58.651176] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 152488 terminated unexpected 00:27:36.129 ===================================================== 00:27:36.129 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:27:36.129 ===================================================== 00:27:36.129 Controller Capabilities/Features 00:27:36.129 ================================ 00:27:36.129 Vendor ID: 1b36 00:27:36.129 Subsystem Vendor ID: 1af4 00:27:36.129 Serial Number: 12340 00:27:36.129 Model Number: QEMU NVMe Ctrl 00:27:36.129 Firmware Version: 8.0.0 00:27:36.129 Recommended Arb Burst: 6 00:27:36.129 IEEE OUI Identifier: 00 54 52 00:27:36.129 Multi-path I/O 00:27:36.129 May have multiple subsystem ports: No 00:27:36.129 May have multiple controllers: No 00:27:36.129 Associated with SR-IOV VF: No 00:27:36.129 Max Data Transfer Size: 524288 00:27:36.129 Max Number of Namespaces: 256 00:27:36.129 Max Number of I/O Queues: 64 00:27:36.129 NVMe Specification Version (VS): 1.4 00:27:36.129 NVMe Specification Version (Identify): 1.4 00:27:36.129 Maximum Queue Entries: 2048 00:27:36.129 Contiguous Queues Required: Yes 00:27:36.129 Arbitration Mechanisms Supported 00:27:36.129 Weighted Round Robin: Not Supported 00:27:36.129 Vendor Specific: Not Supported 00:27:36.129 Reset Timeout: 7500 ms 00:27:36.129 Doorbell Stride: 4 bytes 00:27:36.129 NVM Subsystem Reset: Not Supported 00:27:36.129 Command Sets Supported 00:27:36.129 NVM Command Set: Supported 00:27:36.129 Boot Partition: Not Supported 00:27:36.129 Memory Page Size Minimum: 4096 bytes 00:27:36.129 Memory Page Size Maximum: 65536 bytes 00:27:36.129 Persistent Memory Region: Not Supported 00:27:36.129 Optional Asynchronous Events Supported 00:27:36.129 Namespace Attribute Notices: Supported 00:27:36.129 Firmware Activation Notices: Not Supported 00:27:36.129 ANA Change Notices: Not Supported 00:27:36.129 PLE Aggregate Log Change Notices: Not Supported 00:27:36.129 LBA Status Info Alert Notices: Not Supported 00:27:36.129 EGE Aggregate Log Change Notices: Not Supported 00:27:36.129 Normal NVM Subsystem Shutdown event: Not Supported 00:27:36.129 Zone Descriptor Change Notices: Not Supported 00:27:36.129 Discovery Log Change Notices: Not Supported 00:27:36.129 Controller Attributes 00:27:36.129 128-bit Host Identifier: Not Supported 00:27:36.129 Non-Operational Permissive Mode: Not Supported 00:27:36.129 NVM Sets: Not Supported 00:27:36.129 Read Recovery Levels: Not Supported 00:27:36.129 Endurance Groups: Not Supported 00:27:36.129 Predictable Latency Mode: Not Supported 00:27:36.129 Traffic Based Keep ALive: Not Supported 00:27:36.129 Namespace Granularity: Not Supported 00:27:36.129 SQ Associations: Not Supported 00:27:36.129 UUID List: Not Supported 00:27:36.129 Multi-Domain Subsystem: Not Supported 00:27:36.129 
Fixed Capacity Management: Not Supported 00:27:36.129 Variable Capacity Management: Not Supported 00:27:36.129 Delete Endurance Group: Not Supported 00:27:36.129 Delete NVM Set: Not Supported 00:27:36.129 Extended LBA Formats Supported: Supported 00:27:36.129 Flexible Data Placement Supported: Not Supported 00:27:36.129 00:27:36.129 Controller Memory Buffer Support 00:27:36.129 ================================ 00:27:36.129 Supported: No 00:27:36.129 00:27:36.129 Persistent Memory Region Support 00:27:36.129 ================================ 00:27:36.129 Supported: No 00:27:36.129 00:27:36.129 Admin Command Set Attributes 00:27:36.129 ============================ 00:27:36.129 Security Send/Receive: Not Supported 00:27:36.129 Format NVM: Supported 00:27:36.129 Firmware Activate/Download: Not Supported 00:27:36.129 Namespace Management: Supported 00:27:36.129 Device Self-Test: Not Supported 00:27:36.129 Directives: Supported 00:27:36.129 NVMe-MI: Not Supported 00:27:36.129 Virtualization Management: Not Supported 00:27:36.129 Doorbell Buffer Config: Supported 00:27:36.129 Get LBA Status Capability: Not Supported 00:27:36.129 Command & Feature Lockdown Capability: Not Supported 00:27:36.129 Abort Command Limit: 4 00:27:36.129 Async Event Request Limit: 4 00:27:36.129 Number of Firmware Slots: N/A 00:27:36.129 Firmware Slot 1 Read-Only: N/A 00:27:36.129 Firmware Activation Without Reset: N/A 00:27:36.129 Multiple Update Detection Support: N/A 00:27:36.129 Firmware Update Granularity: No Information Provided 00:27:36.129 Per-Namespace SMART Log: Yes 00:27:36.129 Asymmetric Namespace Access Log Page: Not Supported 00:27:36.129 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:27:36.130 Command Effects Log Page: Supported 00:27:36.130 Get Log Page Extended Data: Supported 00:27:36.130 Telemetry Log Pages: Not Supported 00:27:36.130 Persistent Event Log Pages: Not Supported 00:27:36.130 Supported Log Pages Log Page: May Support 00:27:36.130 Commands Supported & Effects Log Page: Not Supported 00:27:36.130 Feature Identifiers & Effects Log Page:May Support 00:27:36.130 NVMe-MI Commands & Effects Log Page: May Support 00:27:36.130 Data Area 4 for Telemetry Log: Not Supported 00:27:36.130 Error Log Page Entries Supported: 1 00:27:36.130 Keep Alive: Not Supported 00:27:36.130 00:27:36.130 NVM Command Set Attributes 00:27:36.130 ========================== 00:27:36.130 Submission Queue Entry Size 00:27:36.130 Max: 64 00:27:36.130 Min: 64 00:27:36.130 Completion Queue Entry Size 00:27:36.130 Max: 16 00:27:36.130 Min: 16 00:27:36.130 Number of Namespaces: 256 00:27:36.130 Compare Command: Supported 00:27:36.130 Write Uncorrectable Command: Not Supported 00:27:36.130 Dataset Management Command: Supported 00:27:36.130 Write Zeroes Command: Supported 00:27:36.130 Set Features Save Field: Supported 00:27:36.130 Reservations: Not Supported 00:27:36.130 Timestamp: Supported 00:27:36.130 Copy: Supported 00:27:36.130 Volatile Write Cache: Present 00:27:36.130 Atomic Write Unit (Normal): 1 00:27:36.130 Atomic Write Unit (PFail): 1 00:27:36.130 Atomic Compare & Write Unit: 1 00:27:36.130 Fused Compare & Write: Not Supported 00:27:36.130 Scatter-Gather List 00:27:36.130 SGL Command Set: Supported 00:27:36.130 SGL Keyed: Not Supported 00:27:36.130 SGL Bit Bucket Descriptor: Not Supported 00:27:36.130 SGL Metadata Pointer: Not Supported 00:27:36.130 Oversized SGL: Not Supported 00:27:36.130 SGL Metadata Address: Not Supported 00:27:36.130 SGL Offset: Not Supported 00:27:36.130 Transport SGL Data Block: Not Supported 
00:27:36.130 Replay Protected Memory Block: Not Supported 00:27:36.130 00:27:36.130 Firmware Slot Information 00:27:36.130 ========================= 00:27:36.130 Active slot: 1 00:27:36.130 Slot 1 Firmware Revision: 1.0 00:27:36.130 00:27:36.130 00:27:36.130 Commands Supported and Effects 00:27:36.130 ============================== 00:27:36.130 Admin Commands 00:27:36.130 -------------- 00:27:36.130 Delete I/O Submission Queue (00h): Supported 00:27:36.130 Create I/O Submission Queue (01h): Supported 00:27:36.130 Get Log Page (02h): Supported 00:27:36.130 Delete I/O Completion Queue (04h): Supported 00:27:36.130 Create I/O Completion Queue (05h): Supported 00:27:36.130 Identify (06h): Supported 00:27:36.130 Abort (08h): Supported 00:27:36.130 Set Features (09h): Supported 00:27:36.130 Get Features (0Ah): Supported 00:27:36.130 Asynchronous Event Request (0Ch): Supported 00:27:36.130 Namespace Attachment (15h): Supported NS-Inventory-Change 00:27:36.130 Directive Send (19h): Supported 00:27:36.130 Directive Receive (1Ah): Supported 00:27:36.130 Virtualization Management (1Ch): Supported 00:27:36.130 Doorbell Buffer Config (7Ch): Supported 00:27:36.130 Format NVM (80h): Supported LBA-Change 00:27:36.130 I/O Commands 00:27:36.130 ------------ 00:27:36.130 Flush (00h): Supported LBA-Change 00:27:36.130 Write (01h): Supported LBA-Change 00:27:36.130 Read (02h): Supported 00:27:36.130 Compare (05h): Supported 00:27:36.130 Write Zeroes (08h): Supported LBA-Change 00:27:36.130 Dataset Management (09h): Supported LBA-Change 00:27:36.130 Unknown (0Ch): Supported 00:27:36.130 Unknown (12h): Supported 00:27:36.130 Copy (19h): Supported LBA-Change 00:27:36.130 Unknown (1Dh): Supported LBA-Change 00:27:36.130 00:27:36.130 Error Log 00:27:36.130 ========= 00:27:36.130 00:27:36.130 Arbitration 00:27:36.130 =========== 00:27:36.130 Arbitration Burst: no limit 00:27:36.130 00:27:36.130 Power Management 00:27:36.130 ================ 00:27:36.130 Number of Power States: 1 00:27:36.130 Current Power State: Power State #0 00:27:36.130 Power State #0: 00:27:36.130 Max Power: 25.00 W 00:27:36.130 Non-Operational State: Operational 00:27:36.130 Entry Latency: 16 microseconds 00:27:36.130 Exit Latency: 4 microseconds 00:27:36.130 Relative Read Throughput: 0 00:27:36.130 Relative Read Latency: 0 00:27:36.130 Relative Write Throughput: 0 00:27:36.130 Relative Write Latency: 0 00:27:36.130 Idle Power: Not Reported 00:27:36.130 Active Power: Not Reported 00:27:36.130 Non-Operational Permissive Mode: Not Supported 00:27:36.130 00:27:36.130 Health Information 00:27:36.130 ================== 00:27:36.130 Critical Warnings: 00:27:36.130 Available Spare Space: OK 00:27:36.130 Temperature: OK 00:27:36.130 Device Reliability: OK 00:27:36.130 Read Only: No 00:27:36.130 Volatile Memory Backup: OK 00:27:36.130 Current Temperature: 323 Kelvin (50 Celsius) 00:27:36.130 Temperature Threshold: 343 Kelvin (70 Celsius) 00:27:36.130 Available Spare: 0% 00:27:36.130 Available Spare Threshold: 0% 00:27:36.130 Life Percentage Used: 0% 00:27:36.130 Data Units Read: 7693 00:27:36.130 Data Units Written: 3735 00:27:36.130 Host Read Commands: 382996 00:27:36.130 Host Write Commands: 206515 00:27:36.130 Controller Busy Time: 0 minutes 00:27:36.130 Power Cycles: 0 00:27:36.130 Power On Hours: 0 hours 00:27:36.130 Unsafe Shutdowns: 0 00:27:36.130 Unrecoverable Media Errors: 0 00:27:36.130 Lifetime Error Log Entries: 0 00:27:36.130 Warning Temperature Time: 0 minutes 00:27:36.130 Critical Temperature Time: 0 minutes 00:27:36.130 00:27:36.130 
Number of Queues 00:27:36.130 ================ 00:27:36.130 Number of I/O Submission Queues: 64 00:27:36.130 Number of I/O Completion Queues: 64 00:27:36.130 00:27:36.130 ZNS Specific Controller Data 00:27:36.130 ============================ 00:27:36.130 Zone Append Size Limit: 0 00:27:36.130 00:27:36.130 00:27:36.130 Active Namespaces 00:27:36.130 ================= 00:27:36.130 Namespace ID:1 00:27:36.130 Error Recovery Timeout: Unlimited 00:27:36.130 Command Set Identifier: NVM (00h) 00:27:36.130 Deallocate: Supported 00:27:36.130 Deallocated/Unwritten Error: Supported 00:27:36.130 Deallocated Read Value: All 0x00 00:27:36.130 Deallocate in Write Zeroes: Not Supported 00:27:36.130 Deallocated Guard Field: 0xFFFF 00:27:36.130 Flush: Supported 00:27:36.130 Reservation: Not Supported 00:27:36.130 Namespace Sharing Capabilities: Private 00:27:36.130 Size (in LBAs): 1310720 (5GiB) 00:27:36.130 Capacity (in LBAs): 1310720 (5GiB) 00:27:36.130 Utilization (in LBAs): 1310720 (5GiB) 00:27:36.130 Thin Provisioning: Not Supported 00:27:36.130 Per-NS Atomic Units: No 00:27:36.130 Maximum Single Source Range Length: 128 00:27:36.130 Maximum Copy Length: 128 00:27:36.130 Maximum Source Range Count: 128 00:27:36.130 NGUID/EUI64 Never Reused: No 00:27:36.130 Namespace Write Protected: No 00:27:36.130 Number of LBA Formats: 8 00:27:36.130 Current LBA Format: LBA Format #04 00:27:36.130 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:36.130 LBA Format #01: Data Size: 512 Metadata Size: 8 00:27:36.130 LBA Format #02: Data Size: 512 Metadata Size: 16 00:27:36.130 LBA Format #03: Data Size: 512 Metadata Size: 64 00:27:36.130 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:27:36.130 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:27:36.130 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:27:36.130 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:27:36.130 00:27:36.130 21:23:58 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:27:36.130 21:23:58 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:27:36.389 ===================================================== 00:27:36.389 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:27:36.389 ===================================================== 00:27:36.389 Controller Capabilities/Features 00:27:36.389 ================================ 00:27:36.389 Vendor ID: 1b36 00:27:36.389 Subsystem Vendor ID: 1af4 00:27:36.389 Serial Number: 12340 00:27:36.389 Model Number: QEMU NVMe Ctrl 00:27:36.389 Firmware Version: 8.0.0 00:27:36.389 Recommended Arb Burst: 6 00:27:36.389 IEEE OUI Identifier: 00 54 52 00:27:36.389 Multi-path I/O 00:27:36.389 May have multiple subsystem ports: No 00:27:36.389 May have multiple controllers: No 00:27:36.389 Associated with SR-IOV VF: No 00:27:36.389 Max Data Transfer Size: 524288 00:27:36.389 Max Number of Namespaces: 256 00:27:36.389 Max Number of I/O Queues: 64 00:27:36.389 NVMe Specification Version (VS): 1.4 00:27:36.389 NVMe Specification Version (Identify): 1.4 00:27:36.389 Maximum Queue Entries: 2048 00:27:36.389 Contiguous Queues Required: Yes 00:27:36.389 Arbitration Mechanisms Supported 00:27:36.389 Weighted Round Robin: Not Supported 00:27:36.389 Vendor Specific: Not Supported 00:27:36.389 Reset Timeout: 7500 ms 00:27:36.389 Doorbell Stride: 4 bytes 00:27:36.389 NVM Subsystem Reset: Not Supported 00:27:36.389 Command Sets Supported 00:27:36.389 NVM Command Set: Supported 00:27:36.389 Boot Partition: Not Supported 00:27:36.389 Memory Page Size 
Minimum: 4096 bytes 00:27:36.389 Memory Page Size Maximum: 65536 bytes 00:27:36.389 Persistent Memory Region: Not Supported 00:27:36.389 Optional Asynchronous Events Supported 00:27:36.389 Namespace Attribute Notices: Supported 00:27:36.389 Firmware Activation Notices: Not Supported 00:27:36.389 ANA Change Notices: Not Supported 00:27:36.389 PLE Aggregate Log Change Notices: Not Supported 00:27:36.389 LBA Status Info Alert Notices: Not Supported 00:27:36.389 EGE Aggregate Log Change Notices: Not Supported 00:27:36.389 Normal NVM Subsystem Shutdown event: Not Supported 00:27:36.389 Zone Descriptor Change Notices: Not Supported 00:27:36.389 Discovery Log Change Notices: Not Supported 00:27:36.389 Controller Attributes 00:27:36.389 128-bit Host Identifier: Not Supported 00:27:36.389 Non-Operational Permissive Mode: Not Supported 00:27:36.389 NVM Sets: Not Supported 00:27:36.389 Read Recovery Levels: Not Supported 00:27:36.389 Endurance Groups: Not Supported 00:27:36.389 Predictable Latency Mode: Not Supported 00:27:36.389 Traffic Based Keep ALive: Not Supported 00:27:36.389 Namespace Granularity: Not Supported 00:27:36.389 SQ Associations: Not Supported 00:27:36.389 UUID List: Not Supported 00:27:36.389 Multi-Domain Subsystem: Not Supported 00:27:36.389 Fixed Capacity Management: Not Supported 00:27:36.389 Variable Capacity Management: Not Supported 00:27:36.389 Delete Endurance Group: Not Supported 00:27:36.389 Delete NVM Set: Not Supported 00:27:36.389 Extended LBA Formats Supported: Supported 00:27:36.389 Flexible Data Placement Supported: Not Supported 00:27:36.389 00:27:36.389 Controller Memory Buffer Support 00:27:36.389 ================================ 00:27:36.389 Supported: No 00:27:36.390 00:27:36.390 Persistent Memory Region Support 00:27:36.390 ================================ 00:27:36.390 Supported: No 00:27:36.390 00:27:36.390 Admin Command Set Attributes 00:27:36.390 ============================ 00:27:36.390 Security Send/Receive: Not Supported 00:27:36.390 Format NVM: Supported 00:27:36.390 Firmware Activate/Download: Not Supported 00:27:36.390 Namespace Management: Supported 00:27:36.390 Device Self-Test: Not Supported 00:27:36.390 Directives: Supported 00:27:36.390 NVMe-MI: Not Supported 00:27:36.390 Virtualization Management: Not Supported 00:27:36.390 Doorbell Buffer Config: Supported 00:27:36.390 Get LBA Status Capability: Not Supported 00:27:36.390 Command & Feature Lockdown Capability: Not Supported 00:27:36.390 Abort Command Limit: 4 00:27:36.390 Async Event Request Limit: 4 00:27:36.390 Number of Firmware Slots: N/A 00:27:36.390 Firmware Slot 1 Read-Only: N/A 00:27:36.390 Firmware Activation Without Reset: N/A 00:27:36.390 Multiple Update Detection Support: N/A 00:27:36.390 Firmware Update Granularity: No Information Provided 00:27:36.390 Per-Namespace SMART Log: Yes 00:27:36.390 Asymmetric Namespace Access Log Page: Not Supported 00:27:36.390 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:27:36.390 Command Effects Log Page: Supported 00:27:36.390 Get Log Page Extended Data: Supported 00:27:36.390 Telemetry Log Pages: Not Supported 00:27:36.390 Persistent Event Log Pages: Not Supported 00:27:36.390 Supported Log Pages Log Page: May Support 00:27:36.390 Commands Supported & Effects Log Page: Not Supported 00:27:36.390 Feature Identifiers & Effects Log Page:May Support 00:27:36.390 NVMe-MI Commands & Effects Log Page: May Support 00:27:36.390 Data Area 4 for Telemetry Log: Not Supported 00:27:36.390 Error Log Page Entries Supported: 1 00:27:36.390 Keep Alive: Not 
Supported 00:27:36.390 00:27:36.390 NVM Command Set Attributes 00:27:36.390 ========================== 00:27:36.390 Submission Queue Entry Size 00:27:36.390 Max: 64 00:27:36.390 Min: 64 00:27:36.390 Completion Queue Entry Size 00:27:36.390 Max: 16 00:27:36.390 Min: 16 00:27:36.390 Number of Namespaces: 256 00:27:36.390 Compare Command: Supported 00:27:36.390 Write Uncorrectable Command: Not Supported 00:27:36.390 Dataset Management Command: Supported 00:27:36.390 Write Zeroes Command: Supported 00:27:36.390 Set Features Save Field: Supported 00:27:36.390 Reservations: Not Supported 00:27:36.390 Timestamp: Supported 00:27:36.390 Copy: Supported 00:27:36.390 Volatile Write Cache: Present 00:27:36.390 Atomic Write Unit (Normal): 1 00:27:36.390 Atomic Write Unit (PFail): 1 00:27:36.390 Atomic Compare & Write Unit: 1 00:27:36.390 Fused Compare & Write: Not Supported 00:27:36.390 Scatter-Gather List 00:27:36.390 SGL Command Set: Supported 00:27:36.390 SGL Keyed: Not Supported 00:27:36.390 SGL Bit Bucket Descriptor: Not Supported 00:27:36.390 SGL Metadata Pointer: Not Supported 00:27:36.390 Oversized SGL: Not Supported 00:27:36.390 SGL Metadata Address: Not Supported 00:27:36.390 SGL Offset: Not Supported 00:27:36.390 Transport SGL Data Block: Not Supported 00:27:36.390 Replay Protected Memory Block: Not Supported 00:27:36.390 00:27:36.390 Firmware Slot Information 00:27:36.390 ========================= 00:27:36.390 Active slot: 1 00:27:36.390 Slot 1 Firmware Revision: 1.0 00:27:36.390 00:27:36.390 00:27:36.390 Commands Supported and Effects 00:27:36.390 ============================== 00:27:36.390 Admin Commands 00:27:36.390 -------------- 00:27:36.390 Delete I/O Submission Queue (00h): Supported 00:27:36.390 Create I/O Submission Queue (01h): Supported 00:27:36.390 Get Log Page (02h): Supported 00:27:36.390 Delete I/O Completion Queue (04h): Supported 00:27:36.390 Create I/O Completion Queue (05h): Supported 00:27:36.390 Identify (06h): Supported 00:27:36.390 Abort (08h): Supported 00:27:36.390 Set Features (09h): Supported 00:27:36.390 Get Features (0Ah): Supported 00:27:36.390 Asynchronous Event Request (0Ch): Supported 00:27:36.390 Namespace Attachment (15h): Supported NS-Inventory-Change 00:27:36.390 Directive Send (19h): Supported 00:27:36.390 Directive Receive (1Ah): Supported 00:27:36.390 Virtualization Management (1Ch): Supported 00:27:36.390 Doorbell Buffer Config (7Ch): Supported 00:27:36.390 Format NVM (80h): Supported LBA-Change 00:27:36.390 I/O Commands 00:27:36.390 ------------ 00:27:36.390 Flush (00h): Supported LBA-Change 00:27:36.390 Write (01h): Supported LBA-Change 00:27:36.390 Read (02h): Supported 00:27:36.390 Compare (05h): Supported 00:27:36.390 Write Zeroes (08h): Supported LBA-Change 00:27:36.390 Dataset Management (09h): Supported LBA-Change 00:27:36.390 Unknown (0Ch): Supported 00:27:36.390 Unknown (12h): Supported 00:27:36.390 Copy (19h): Supported LBA-Change 00:27:36.390 Unknown (1Dh): Supported LBA-Change 00:27:36.390 00:27:36.390 Error Log 00:27:36.390 ========= 00:27:36.390 00:27:36.390 Arbitration 00:27:36.390 =========== 00:27:36.390 Arbitration Burst: no limit 00:27:36.390 00:27:36.390 Power Management 00:27:36.390 ================ 00:27:36.390 Number of Power States: 1 00:27:36.390 Current Power State: Power State #0 00:27:36.390 Power State #0: 00:27:36.390 Max Power: 25.00 W 00:27:36.390 Non-Operational State: Operational 00:27:36.390 Entry Latency: 16 microseconds 00:27:36.390 Exit Latency: 4 microseconds 00:27:36.390 Relative Read Throughput: 0 
00:27:36.390 Relative Read Latency: 0 00:27:36.390 Relative Write Throughput: 0 00:27:36.390 Relative Write Latency: 0 00:27:36.390 Idle Power: Not Reported 00:27:36.390 Active Power: Not Reported 00:27:36.390 Non-Operational Permissive Mode: Not Supported 00:27:36.390 00:27:36.390 Health Information 00:27:36.390 ================== 00:27:36.390 Critical Warnings: 00:27:36.390 Available Spare Space: OK 00:27:36.390 Temperature: OK 00:27:36.390 Device Reliability: OK 00:27:36.390 Read Only: No 00:27:36.390 Volatile Memory Backup: OK 00:27:36.390 Current Temperature: 323 Kelvin (50 Celsius) 00:27:36.390 Temperature Threshold: 343 Kelvin (70 Celsius) 00:27:36.390 Available Spare: 0% 00:27:36.390 Available Spare Threshold: 0% 00:27:36.390 Life Percentage Used: 0% 00:27:36.390 Data Units Read: 7693 00:27:36.390 Data Units Written: 3735 00:27:36.390 Host Read Commands: 382996 00:27:36.390 Host Write Commands: 206515 00:27:36.390 Controller Busy Time: 0 minutes 00:27:36.390 Power Cycles: 0 00:27:36.390 Power On Hours: 0 hours 00:27:36.390 Unsafe Shutdowns: 0 00:27:36.390 Unrecoverable Media Errors: 0 00:27:36.390 Lifetime Error Log Entries: 0 00:27:36.390 Warning Temperature Time: 0 minutes 00:27:36.390 Critical Temperature Time: 0 minutes 00:27:36.390 00:27:36.390 Number of Queues 00:27:36.390 ================ 00:27:36.390 Number of I/O Submission Queues: 64 00:27:36.390 Number of I/O Completion Queues: 64 00:27:36.390 00:27:36.390 ZNS Specific Controller Data 00:27:36.390 ============================ 00:27:36.390 Zone Append Size Limit: 0 00:27:36.390 00:27:36.390 00:27:36.390 Active Namespaces 00:27:36.390 ================= 00:27:36.390 Namespace ID:1 00:27:36.390 Error Recovery Timeout: Unlimited 00:27:36.390 Command Set Identifier: NVM (00h) 00:27:36.390 Deallocate: Supported 00:27:36.390 Deallocated/Unwritten Error: Supported 00:27:36.390 Deallocated Read Value: All 0x00 00:27:36.390 Deallocate in Write Zeroes: Not Supported 00:27:36.390 Deallocated Guard Field: 0xFFFF 00:27:36.390 Flush: Supported 00:27:36.390 Reservation: Not Supported 00:27:36.390 Namespace Sharing Capabilities: Private 00:27:36.390 Size (in LBAs): 1310720 (5GiB) 00:27:36.390 Capacity (in LBAs): 1310720 (5GiB) 00:27:36.390 Utilization (in LBAs): 1310720 (5GiB) 00:27:36.390 Thin Provisioning: Not Supported 00:27:36.390 Per-NS Atomic Units: No 00:27:36.390 Maximum Single Source Range Length: 128 00:27:36.390 Maximum Copy Length: 128 00:27:36.390 Maximum Source Range Count: 128 00:27:36.390 NGUID/EUI64 Never Reused: No 00:27:36.390 Namespace Write Protected: No 00:27:36.390 Number of LBA Formats: 8 00:27:36.390 Current LBA Format: LBA Format #04 00:27:36.390 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:36.390 LBA Format #01: Data Size: 512 Metadata Size: 8 00:27:36.390 LBA Format #02: Data Size: 512 Metadata Size: 16 00:27:36.390 LBA Format #03: Data Size: 512 Metadata Size: 64 00:27:36.390 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:27:36.390 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:27:36.390 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:27:36.390 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:27:36.390 00:27:36.390 00:27:36.390 real 0m0.686s 00:27:36.390 user 0m0.269s 00:27:36.391 sys 0m0.290s 00:27:36.391 21:23:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:36.391 21:23:59 -- common/autotest_common.sh@10 -- # set +x 00:27:36.391 ************************************ 00:27:36.391 END TEST nvme_identify 00:27:36.391 ************************************ 00:27:36.391 
21:23:59 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:27:36.391 21:23:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:36.391 21:23:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:36.391 21:23:59 -- common/autotest_common.sh@10 -- # set +x 00:27:36.649 ************************************ 00:27:36.649 START TEST nvme_perf 00:27:36.649 ************************************ 00:27:36.649 21:23:59 -- common/autotest_common.sh@1104 -- # nvme_perf 00:27:36.649 21:23:59 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:27:38.024 Initializing NVMe Controllers 00:27:38.024 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:27:38.024 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:27:38.024 Initialization complete. Launching workers. 00:27:38.024 ======================================================== 00:27:38.024 Latency(us) 00:27:38.024 Device Information : IOPS MiB/s Average min max 00:27:38.024 PCIE (0000:00:06.0) NSID 1 from core 0: 56576.00 663.00 2260.92 1188.66 6430.85 00:27:38.024 ======================================================== 00:27:38.024 Total : 56576.00 663.00 2260.92 1188.66 6430.85 00:27:38.024 00:27:38.024 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:27:38.024 ================================================================================= 00:27:38.024 1.00000% : 1347.956us 00:27:38.024 10.00000% : 1571.375us 00:27:38.024 25.00000% : 1824.582us 00:27:38.024 50.00000% : 2249.076us 00:27:38.024 75.00000% : 2666.124us 00:27:38.024 90.00000% : 2934.225us 00:27:38.024 95.00000% : 3112.960us 00:27:38.024 98.00000% : 3351.273us 00:27:38.024 99.00000% : 3485.324us 00:27:38.024 99.50000% : 3723.636us 00:27:38.024 99.90000% : 5421.615us 00:27:38.024 99.99000% : 6285.498us 00:27:38.024 99.99900% : 6434.444us 00:27:38.024 99.99990% : 6434.444us 00:27:38.024 99.99999% : 6434.444us 00:27:38.024 00:27:38.024 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:27:38.024 ============================================================================== 00:27:38.024 Range in us Cumulative IO count 00:27:38.024 1184.116 - 1191.564: 0.0035% ( 2) 00:27:38.024 1199.011 - 1206.458: 0.0053% ( 1) 00:27:38.024 1213.905 - 1221.353: 0.0106% ( 3) 00:27:38.024 1221.353 - 1228.800: 0.0124% ( 1) 00:27:38.024 1228.800 - 1236.247: 0.0212% ( 5) 00:27:38.024 1236.247 - 1243.695: 0.0300% ( 5) 00:27:38.024 1243.695 - 1251.142: 0.0407% ( 6) 00:27:38.024 1251.142 - 1258.589: 0.0742% ( 19) 00:27:38.024 1258.589 - 1266.036: 0.1025% ( 16) 00:27:38.024 1266.036 - 1273.484: 0.1449% ( 24) 00:27:38.025 1273.484 - 1280.931: 0.1909% ( 26) 00:27:38.025 1280.931 - 1288.378: 0.2333% ( 24) 00:27:38.025 1288.378 - 1295.825: 0.3022% ( 39) 00:27:38.025 1295.825 - 1303.273: 0.3659% ( 36) 00:27:38.025 1303.273 - 1310.720: 0.4666% ( 57) 00:27:38.025 1310.720 - 1318.167: 0.5515% ( 48) 00:27:38.025 1318.167 - 1325.615: 0.6593% ( 61) 00:27:38.025 1325.615 - 1333.062: 0.7812% ( 69) 00:27:38.025 1333.062 - 1340.509: 0.8926% ( 63) 00:27:38.025 1340.509 - 1347.956: 1.0199% ( 72) 00:27:38.025 1347.956 - 1355.404: 1.1471% ( 72) 00:27:38.025 1355.404 - 1362.851: 1.2956% ( 84) 00:27:38.025 1362.851 - 1370.298: 1.4741% ( 101) 00:27:38.025 1370.298 - 1377.745: 1.6562% ( 103) 00:27:38.025 1377.745 - 1385.193: 1.8506% ( 110) 00:27:38.025 1385.193 - 1392.640: 2.0539% ( 115) 00:27:38.025 1392.640 - 1400.087: 2.2854% ( 131) 00:27:38.025 1400.087 - 1407.535: 2.5205% ( 133) 00:27:38.025 
1407.535 - 1414.982: 2.7697% ( 141) 00:27:38.025 1414.982 - 1422.429: 3.0207% ( 142) 00:27:38.025 1422.429 - 1429.876: 3.3212% ( 170) 00:27:38.025 1429.876 - 1437.324: 3.6075% ( 162) 00:27:38.025 1437.324 - 1444.771: 3.8815% ( 155) 00:27:38.025 1444.771 - 1452.218: 4.1926% ( 176) 00:27:38.025 1452.218 - 1459.665: 4.5107% ( 180) 00:27:38.025 1459.665 - 1467.113: 4.8218% ( 176) 00:27:38.025 1467.113 - 1474.560: 5.1559% ( 189) 00:27:38.025 1474.560 - 1482.007: 5.4811% ( 184) 00:27:38.025 1482.007 - 1489.455: 5.8541% ( 211) 00:27:38.025 1489.455 - 1496.902: 6.2412% ( 219) 00:27:38.025 1496.902 - 1504.349: 6.5823% ( 193) 00:27:38.025 1504.349 - 1511.796: 6.9800% ( 225) 00:27:38.025 1511.796 - 1519.244: 7.3688% ( 220) 00:27:38.025 1519.244 - 1526.691: 7.7330% ( 206) 00:27:38.025 1526.691 - 1534.138: 8.1713% ( 248) 00:27:38.025 1534.138 - 1541.585: 8.5036% ( 188) 00:27:38.025 1541.585 - 1549.033: 8.9543% ( 255) 00:27:38.025 1549.033 - 1556.480: 9.3591% ( 229) 00:27:38.025 1556.480 - 1563.927: 9.7727% ( 234) 00:27:38.025 1563.927 - 1571.375: 10.1934% ( 238) 00:27:38.025 1571.375 - 1578.822: 10.6123% ( 237) 00:27:38.025 1578.822 - 1586.269: 11.0347% ( 239) 00:27:38.025 1586.269 - 1593.716: 11.4713% ( 247) 00:27:38.025 1593.716 - 1601.164: 11.8655% ( 223) 00:27:38.025 1601.164 - 1608.611: 12.3285% ( 262) 00:27:38.025 1608.611 - 1616.058: 12.7351% ( 230) 00:27:38.025 1616.058 - 1623.505: 13.1770% ( 250) 00:27:38.025 1623.505 - 1630.953: 13.6082% ( 244) 00:27:38.025 1630.953 - 1638.400: 14.0271% ( 237) 00:27:38.025 1638.400 - 1645.847: 14.4938% ( 264) 00:27:38.025 1645.847 - 1653.295: 14.9003% ( 230) 00:27:38.025 1653.295 - 1660.742: 15.3599% ( 260) 00:27:38.025 1660.742 - 1668.189: 15.7947% ( 246) 00:27:38.025 1668.189 - 1675.636: 16.2154% ( 238) 00:27:38.025 1675.636 - 1683.084: 16.6431% ( 242) 00:27:38.025 1683.084 - 1690.531: 17.1097% ( 264) 00:27:38.025 1690.531 - 1697.978: 17.5233% ( 234) 00:27:38.025 1697.978 - 1705.425: 18.0023% ( 271) 00:27:38.025 1705.425 - 1712.873: 18.4124% ( 232) 00:27:38.025 1712.873 - 1720.320: 18.8737% ( 261) 00:27:38.025 1720.320 - 1727.767: 19.3209% ( 253) 00:27:38.025 1727.767 - 1735.215: 19.7610% ( 249) 00:27:38.025 1735.215 - 1742.662: 20.1888% ( 242) 00:27:38.025 1742.662 - 1750.109: 20.6430% ( 257) 00:27:38.025 1750.109 - 1757.556: 21.0884% ( 252) 00:27:38.025 1757.556 - 1765.004: 21.5604% ( 267) 00:27:38.025 1765.004 - 1772.451: 21.9634% ( 228) 00:27:38.025 1772.451 - 1779.898: 22.4247% ( 261) 00:27:38.025 1779.898 - 1787.345: 22.8807% ( 258) 00:27:38.025 1787.345 - 1794.793: 23.2802% ( 226) 00:27:38.025 1794.793 - 1802.240: 23.7610% ( 272) 00:27:38.025 1802.240 - 1809.687: 24.1834% ( 239) 00:27:38.025 1809.687 - 1817.135: 24.6182% ( 246) 00:27:38.025 1817.135 - 1824.582: 25.1043% ( 275) 00:27:38.025 1824.582 - 1832.029: 25.5055% ( 227) 00:27:38.025 1832.029 - 1839.476: 25.9792% ( 268) 00:27:38.025 1839.476 - 1846.924: 26.4105% ( 244) 00:27:38.025 1846.924 - 1854.371: 26.8506% ( 249) 00:27:38.025 1854.371 - 1861.818: 27.3208% ( 266) 00:27:38.025 1861.818 - 1869.265: 27.7326% ( 233) 00:27:38.025 1869.265 - 1876.713: 28.1727% ( 249) 00:27:38.025 1876.713 - 1884.160: 28.6429% ( 266) 00:27:38.025 1884.160 - 1891.607: 29.0370% ( 223) 00:27:38.025 1891.607 - 1899.055: 29.5125% ( 269) 00:27:38.025 1899.055 - 1906.502: 29.9297% ( 236) 00:27:38.025 1906.502 - 1921.396: 30.8311% ( 510) 00:27:38.025 1921.396 - 1936.291: 31.7184% ( 502) 00:27:38.025 1936.291 - 1951.185: 32.6039% ( 501) 00:27:38.025 1951.185 - 1966.080: 33.4948% ( 504) 00:27:38.025 1966.080 - 1980.975: 
34.3679% ( 494) 00:27:38.025 1980.975 - 1995.869: 35.2711% ( 511) 00:27:38.025 1995.869 - 2010.764: 36.1655% ( 506) 00:27:38.025 2010.764 - 2025.658: 37.0617% ( 507) 00:27:38.025 2025.658 - 2040.553: 37.9401% ( 497) 00:27:38.025 2040.553 - 2055.447: 38.8575% ( 519) 00:27:38.025 2055.447 - 2070.342: 39.7289% ( 493) 00:27:38.025 2070.342 - 2085.236: 40.6197% ( 504) 00:27:38.025 2085.236 - 2100.131: 41.5194% ( 509) 00:27:38.025 2100.131 - 2115.025: 42.4190% ( 509) 00:27:38.025 2115.025 - 2129.920: 43.3152% ( 507) 00:27:38.025 2129.920 - 2144.815: 44.2149% ( 509) 00:27:38.025 2144.815 - 2159.709: 45.1092% ( 506) 00:27:38.025 2159.709 - 2174.604: 46.0018% ( 505) 00:27:38.025 2174.604 - 2189.498: 46.8874% ( 501) 00:27:38.025 2189.498 - 2204.393: 47.7764% ( 503) 00:27:38.025 2204.393 - 2219.287: 48.6814% ( 512) 00:27:38.025 2219.287 - 2234.182: 49.5740% ( 505) 00:27:38.025 2234.182 - 2249.076: 50.4702% ( 507) 00:27:38.025 2249.076 - 2263.971: 51.3592% ( 503) 00:27:38.025 2263.971 - 2278.865: 52.2677% ( 514) 00:27:38.025 2278.865 - 2293.760: 53.1674% ( 509) 00:27:38.025 2293.760 - 2308.655: 54.0459% ( 497) 00:27:38.025 2308.655 - 2323.549: 54.9155% ( 492) 00:27:38.025 2323.549 - 2338.444: 55.8063% ( 504) 00:27:38.025 2338.444 - 2353.338: 56.7078% ( 510) 00:27:38.025 2353.338 - 2368.233: 57.6128% ( 512) 00:27:38.025 2368.233 - 2383.127: 58.4930% ( 498) 00:27:38.025 2383.127 - 2398.022: 59.3821% ( 503) 00:27:38.025 2398.022 - 2412.916: 60.2552% ( 494) 00:27:38.025 2412.916 - 2427.811: 61.1708% ( 518) 00:27:38.025 2427.811 - 2442.705: 62.0829% ( 516) 00:27:38.025 2442.705 - 2457.600: 62.9560% ( 494) 00:27:38.025 2457.600 - 2472.495: 63.8363% ( 498) 00:27:38.025 2472.495 - 2487.389: 64.7554% ( 520) 00:27:38.025 2487.389 - 2502.284: 65.6674% ( 516) 00:27:38.025 2502.284 - 2517.178: 66.5406% ( 494) 00:27:38.025 2517.178 - 2532.073: 67.4243% ( 500) 00:27:38.025 2532.073 - 2546.967: 68.3329% ( 514) 00:27:38.025 2546.967 - 2561.862: 69.2060% ( 494) 00:27:38.025 2561.862 - 2576.756: 70.1039% ( 508) 00:27:38.025 2576.756 - 2591.651: 71.0018% ( 508) 00:27:38.025 2591.651 - 2606.545: 71.8962% ( 506) 00:27:38.025 2606.545 - 2621.440: 72.7870% ( 504) 00:27:38.025 2621.440 - 2636.335: 73.7026% ( 518) 00:27:38.025 2636.335 - 2651.229: 74.6094% ( 513) 00:27:38.025 2651.229 - 2666.124: 75.4984% ( 503) 00:27:38.025 2666.124 - 2681.018: 76.3698% ( 493) 00:27:38.025 2681.018 - 2695.913: 77.2784% ( 514) 00:27:38.025 2695.913 - 2710.807: 78.2063% ( 525) 00:27:38.025 2710.807 - 2725.702: 79.1077% ( 510) 00:27:38.025 2725.702 - 2740.596: 79.9791% ( 493) 00:27:38.025 2740.596 - 2755.491: 80.8682% ( 503) 00:27:38.025 2755.491 - 2770.385: 81.7644% ( 507) 00:27:38.025 2770.385 - 2785.280: 82.6198% ( 484) 00:27:38.025 2785.280 - 2800.175: 83.4594% ( 475) 00:27:38.025 2800.175 - 2815.069: 84.3255% ( 490) 00:27:38.025 2815.069 - 2829.964: 85.1757% ( 481) 00:27:38.025 2829.964 - 2844.858: 85.9852% ( 458) 00:27:38.025 2844.858 - 2859.753: 86.7983% ( 460) 00:27:38.025 2859.753 - 2874.647: 87.5477% ( 424) 00:27:38.025 2874.647 - 2889.542: 88.2883% ( 419) 00:27:38.026 2889.542 - 2904.436: 89.0095% ( 408) 00:27:38.026 2904.436 - 2919.331: 89.6652% ( 371) 00:27:38.026 2919.331 - 2934.225: 90.3068% ( 363) 00:27:38.026 2934.225 - 2949.120: 90.9272% ( 351) 00:27:38.026 2949.120 - 2964.015: 91.4911% ( 319) 00:27:38.026 2964.015 - 2978.909: 92.0408% ( 311) 00:27:38.026 2978.909 - 2993.804: 92.5269% ( 275) 00:27:38.026 2993.804 - 3008.698: 92.9794% ( 256) 00:27:38.026 3008.698 - 3023.593: 93.3912% ( 233) 00:27:38.026 3023.593 - 3038.487: 
93.7624% ( 210) 00:27:38.026 3038.487 - 3053.382: 94.0876% ( 184) 00:27:38.026 3053.382 - 3068.276: 94.3704% ( 160) 00:27:38.026 3068.276 - 3083.171: 94.6585% ( 163) 00:27:38.026 3083.171 - 3098.065: 94.9201% ( 148) 00:27:38.026 3098.065 - 3112.960: 95.1729% ( 143) 00:27:38.026 3112.960 - 3127.855: 95.4150% ( 137) 00:27:38.026 3127.855 - 3142.749: 95.6377% ( 126) 00:27:38.026 3142.749 - 3157.644: 95.8569% ( 124) 00:27:38.026 3157.644 - 3172.538: 96.0619% ( 116) 00:27:38.026 3172.538 - 3187.433: 96.2581% ( 111) 00:27:38.026 3187.433 - 3202.327: 96.4508% ( 109) 00:27:38.026 3202.327 - 3217.222: 96.6346% ( 104) 00:27:38.026 3217.222 - 3232.116: 96.8131% ( 101) 00:27:38.026 3232.116 - 3247.011: 96.9828% ( 96) 00:27:38.026 3247.011 - 3261.905: 97.1454% ( 92) 00:27:38.026 3261.905 - 3276.800: 97.3169% ( 97) 00:27:38.026 3276.800 - 3291.695: 97.4742% ( 89) 00:27:38.026 3291.695 - 3306.589: 97.6209% ( 83) 00:27:38.026 3306.589 - 3321.484: 97.7605% ( 79) 00:27:38.026 3321.484 - 3336.378: 97.8984% ( 78) 00:27:38.026 3336.378 - 3351.273: 98.0451% ( 83) 00:27:38.026 3351.273 - 3366.167: 98.1777% ( 75) 00:27:38.026 3366.167 - 3381.062: 98.3049% ( 72) 00:27:38.026 3381.062 - 3395.956: 98.4340% ( 73) 00:27:38.026 3395.956 - 3410.851: 98.5612% ( 72) 00:27:38.026 3410.851 - 3425.745: 98.6761% ( 65) 00:27:38.026 3425.745 - 3440.640: 98.7751% ( 56) 00:27:38.026 3440.640 - 3455.535: 98.8776% ( 58) 00:27:38.026 3455.535 - 3470.429: 98.9678% ( 51) 00:27:38.026 3470.429 - 3485.324: 99.0561% ( 50) 00:27:38.026 3485.324 - 3500.218: 99.1410% ( 48) 00:27:38.026 3500.218 - 3515.113: 99.2134% ( 41) 00:27:38.026 3515.113 - 3530.007: 99.2771% ( 36) 00:27:38.026 3530.007 - 3544.902: 99.3354% ( 33) 00:27:38.026 3544.902 - 3559.796: 99.3725% ( 21) 00:27:38.026 3559.796 - 3574.691: 99.3937% ( 12) 00:27:38.026 3574.691 - 3589.585: 99.4149% ( 12) 00:27:38.026 3589.585 - 3604.480: 99.4362% ( 12) 00:27:38.026 3604.480 - 3619.375: 99.4521% ( 9) 00:27:38.026 3619.375 - 3634.269: 99.4627% ( 6) 00:27:38.026 3634.269 - 3649.164: 99.4715% ( 5) 00:27:38.026 3649.164 - 3664.058: 99.4786% ( 4) 00:27:38.026 3664.058 - 3678.953: 99.4856% ( 4) 00:27:38.026 3678.953 - 3693.847: 99.4910% ( 3) 00:27:38.026 3693.847 - 3708.742: 99.4963% ( 3) 00:27:38.026 3708.742 - 3723.636: 99.5016% ( 3) 00:27:38.026 3723.636 - 3738.531: 99.5051% ( 2) 00:27:38.026 3738.531 - 3753.425: 99.5069% ( 1) 00:27:38.026 3753.425 - 3768.320: 99.5104% ( 2) 00:27:38.026 3768.320 - 3783.215: 99.5139% ( 2) 00:27:38.026 3783.215 - 3798.109: 99.5175% ( 2) 00:27:38.026 3798.109 - 3813.004: 99.5192% ( 1) 00:27:38.026 3813.004 - 3842.793: 99.5263% ( 4) 00:27:38.026 3842.793 - 3872.582: 99.5334% ( 4) 00:27:38.026 3872.582 - 3902.371: 99.5404% ( 4) 00:27:38.026 3902.371 - 3932.160: 99.5475% ( 4) 00:27:38.026 3932.160 - 3961.949: 99.5546% ( 4) 00:27:38.026 3961.949 - 3991.738: 99.5617% ( 4) 00:27:38.026 3991.738 - 4021.527: 99.5687% ( 4) 00:27:38.026 4021.527 - 4051.316: 99.5811% ( 7) 00:27:38.026 4051.316 - 4081.105: 99.5899% ( 5) 00:27:38.026 4081.105 - 4110.895: 99.6005% ( 6) 00:27:38.026 4110.895 - 4140.684: 99.6094% ( 5) 00:27:38.026 4140.684 - 4170.473: 99.6200% ( 6) 00:27:38.026 4170.473 - 4200.262: 99.6288% ( 5) 00:27:38.026 4200.262 - 4230.051: 99.6359% ( 4) 00:27:38.026 4230.051 - 4259.840: 99.6465% ( 6) 00:27:38.026 4259.840 - 4289.629: 99.6536% ( 4) 00:27:38.026 4289.629 - 4319.418: 99.6642% ( 6) 00:27:38.026 4319.418 - 4349.207: 99.6748% ( 6) 00:27:38.026 4349.207 - 4378.996: 99.6836% ( 5) 00:27:38.026 4378.996 - 4408.785: 99.6924% ( 5) 00:27:38.026 4408.785 - 
4438.575: 99.6978% ( 3) 00:27:38.026 4438.575 - 4468.364: 99.7031% ( 3) 00:27:38.026 4468.364 - 4498.153: 99.7101% ( 4) 00:27:38.026 4498.153 - 4527.942: 99.7154% ( 3) 00:27:38.026 4527.942 - 4557.731: 99.7225% ( 4) 00:27:38.026 4557.731 - 4587.520: 99.7296% ( 4) 00:27:38.026 4587.520 - 4617.309: 99.7349% ( 3) 00:27:38.026 4617.309 - 4647.098: 99.7419% ( 4) 00:27:38.026 4647.098 - 4676.887: 99.7490% ( 4) 00:27:38.026 4676.887 - 4706.676: 99.7543% ( 3) 00:27:38.026 4706.676 - 4736.465: 99.7596% ( 3) 00:27:38.026 4736.465 - 4766.255: 99.7649% ( 3) 00:27:38.026 4766.255 - 4796.044: 99.7720% ( 4) 00:27:38.026 4796.044 - 4825.833: 99.7791% ( 4) 00:27:38.026 4825.833 - 4855.622: 99.7844% ( 3) 00:27:38.026 4855.622 - 4885.411: 99.7897% ( 3) 00:27:38.026 4885.411 - 4915.200: 99.7932% ( 2) 00:27:38.026 4915.200 - 4944.989: 99.8003% ( 4) 00:27:38.026 4944.989 - 4974.778: 99.8073% ( 4) 00:27:38.026 4974.778 - 5004.567: 99.8126% ( 3) 00:27:38.026 5004.567 - 5034.356: 99.8197% ( 4) 00:27:38.026 5034.356 - 5064.145: 99.8268% ( 4) 00:27:38.026 5064.145 - 5093.935: 99.8321% ( 3) 00:27:38.026 5093.935 - 5123.724: 99.8392% ( 4) 00:27:38.026 5123.724 - 5153.513: 99.8445% ( 3) 00:27:38.026 5153.513 - 5183.302: 99.8498% ( 3) 00:27:38.026 5183.302 - 5213.091: 99.8551% ( 3) 00:27:38.026 5213.091 - 5242.880: 99.8621% ( 4) 00:27:38.026 5242.880 - 5272.669: 99.8692% ( 4) 00:27:38.026 5272.669 - 5302.458: 99.8745% ( 3) 00:27:38.026 5302.458 - 5332.247: 99.8816% ( 4) 00:27:38.026 5332.247 - 5362.036: 99.8886% ( 4) 00:27:38.026 5362.036 - 5391.825: 99.8939% ( 3) 00:27:38.026 5391.825 - 5421.615: 99.9010% ( 4) 00:27:38.026 5421.615 - 5451.404: 99.9046% ( 2) 00:27:38.026 5451.404 - 5481.193: 99.9116% ( 4) 00:27:38.026 5481.193 - 5510.982: 99.9187% ( 4) 00:27:38.026 5510.982 - 5540.771: 99.9222% ( 2) 00:27:38.026 5540.771 - 5570.560: 99.9258% ( 2) 00:27:38.026 5570.560 - 5600.349: 99.9275% ( 1) 00:27:38.026 5600.349 - 5630.138: 99.9311% ( 2) 00:27:38.026 5630.138 - 5659.927: 99.9346% ( 2) 00:27:38.026 5659.927 - 5689.716: 99.9364% ( 1) 00:27:38.026 5689.716 - 5719.505: 99.9399% ( 2) 00:27:38.026 5719.505 - 5749.295: 99.9434% ( 2) 00:27:38.026 5749.295 - 5779.084: 99.9452% ( 1) 00:27:38.026 5779.084 - 5808.873: 99.9487% ( 2) 00:27:38.026 5808.873 - 5838.662: 99.9505% ( 1) 00:27:38.026 5838.662 - 5868.451: 99.9540% ( 2) 00:27:38.026 5868.451 - 5898.240: 99.9558% ( 1) 00:27:38.026 5898.240 - 5928.029: 99.9593% ( 2) 00:27:38.026 5928.029 - 5957.818: 99.9611% ( 1) 00:27:38.026 5957.818 - 5987.607: 99.9629% ( 1) 00:27:38.026 5987.607 - 6017.396: 99.9664% ( 2) 00:27:38.026 6017.396 - 6047.185: 99.9700% ( 2) 00:27:38.026 6047.185 - 6076.975: 99.9717% ( 1) 00:27:38.026 6076.975 - 6106.764: 99.9753% ( 2) 00:27:38.026 6106.764 - 6136.553: 99.9770% ( 1) 00:27:38.026 6136.553 - 6166.342: 99.9806% ( 2) 00:27:38.026 6166.342 - 6196.131: 99.9823% ( 1) 00:27:38.026 6196.131 - 6225.920: 99.9859% ( 2) 00:27:38.026 6225.920 - 6255.709: 99.9876% ( 1) 00:27:38.026 6255.709 - 6285.498: 99.9912% ( 2) 00:27:38.026 6285.498 - 6315.287: 99.9929% ( 1) 00:27:38.026 6315.287 - 6345.076: 99.9965% ( 2) 00:27:38.026 6345.076 - 6374.865: 99.9982% ( 1) 00:27:38.026 6404.655 - 6434.444: 100.0000% ( 1) 00:27:38.026 00:27:38.026 21:24:00 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:27:39.404 Initializing NVMe Controllers 00:27:39.404 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:27:39.404 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:27:39.404 Initialization 
complete. Launching workers. 00:27:39.404 ======================================================== 00:27:39.404 Latency(us) 00:27:39.404 Device Information : IOPS MiB/s Average min max 00:27:39.404 PCIE (0000:00:06.0) NSID 1 from core 0: 56447.94 661.50 2266.90 1035.43 5266.63 00:27:39.404 ======================================================== 00:27:39.404 Total : 56447.94 661.50 2266.90 1035.43 5266.63 00:27:39.404 00:27:39.404 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:27:39.404 ================================================================================= 00:27:39.404 1.00000% : 1541.585us 00:27:39.404 10.00000% : 1809.687us 00:27:39.404 25.00000% : 1980.975us 00:27:39.404 50.00000% : 2174.604us 00:27:39.404 75.00000% : 2457.600us 00:27:39.404 90.00000% : 2874.647us 00:27:39.404 95.00000% : 3202.327us 00:27:39.404 98.00000% : 3589.585us 00:27:39.404 99.00000% : 3842.793us 00:27:39.404 99.50000% : 4110.895us 00:27:39.404 99.90000% : 4825.833us 00:27:39.404 99.99000% : 5213.091us 00:27:39.404 99.99900% : 5272.669us 00:27:39.404 99.99990% : 5272.669us 00:27:39.404 99.99999% : 5272.669us 00:27:39.404 00:27:39.404 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:27:39.404 ============================================================================== 00:27:39.404 Range in us Cumulative IO count 00:27:39.404 1035.171 - 1042.618: 0.0018% ( 1) 00:27:39.404 1042.618 - 1050.065: 0.0035% ( 1) 00:27:39.404 1094.749 - 1102.196: 0.0071% ( 2) 00:27:39.404 1124.538 - 1131.985: 0.0089% ( 1) 00:27:39.404 1131.985 - 1139.433: 0.0106% ( 1) 00:27:39.404 1139.433 - 1146.880: 0.0124% ( 1) 00:27:39.404 1184.116 - 1191.564: 0.0142% ( 1) 00:27:39.404 1199.011 - 1206.458: 0.0159% ( 1) 00:27:39.404 1206.458 - 1213.905: 0.0177% ( 1) 00:27:39.404 1213.905 - 1221.353: 0.0230% ( 3) 00:27:39.404 1221.353 - 1228.800: 0.0319% ( 5) 00:27:39.404 1228.800 - 1236.247: 0.0354% ( 2) 00:27:39.404 1236.247 - 1243.695: 0.0372% ( 1) 00:27:39.404 1243.695 - 1251.142: 0.0407% ( 2) 00:27:39.404 1251.142 - 1258.589: 0.0461% ( 3) 00:27:39.404 1258.589 - 1266.036: 0.0478% ( 1) 00:27:39.404 1266.036 - 1273.484: 0.0585% ( 6) 00:27:39.404 1273.484 - 1280.931: 0.0638% ( 3) 00:27:39.404 1280.931 - 1288.378: 0.0709% ( 4) 00:27:39.404 1288.378 - 1295.825: 0.0762% ( 3) 00:27:39.404 1295.825 - 1303.273: 0.0797% ( 2) 00:27:39.404 1303.273 - 1310.720: 0.0868% ( 4) 00:27:39.404 1310.720 - 1318.167: 0.0939% ( 4) 00:27:39.404 1318.167 - 1325.615: 0.0992% ( 3) 00:27:39.404 1325.615 - 1333.062: 0.1081% ( 5) 00:27:39.404 1333.062 - 1340.509: 0.1152% ( 4) 00:27:39.404 1340.509 - 1347.956: 0.1293% ( 8) 00:27:39.404 1347.956 - 1355.404: 0.1400% ( 6) 00:27:39.404 1355.404 - 1362.851: 0.1470% ( 4) 00:27:39.404 1362.851 - 1370.298: 0.1630% ( 9) 00:27:39.404 1370.298 - 1377.745: 0.1683% ( 3) 00:27:39.404 1377.745 - 1385.193: 0.1825% ( 8) 00:27:39.404 1385.193 - 1392.640: 0.1931% ( 6) 00:27:39.404 1392.640 - 1400.087: 0.2197% ( 15) 00:27:39.404 1400.087 - 1407.535: 0.2356% ( 9) 00:27:39.404 1407.535 - 1414.982: 0.2640% ( 16) 00:27:39.404 1414.982 - 1422.429: 0.2888% ( 14) 00:27:39.404 1422.429 - 1429.876: 0.3100% ( 12) 00:27:39.404 1429.876 - 1437.324: 0.3295% ( 11) 00:27:39.404 1437.324 - 1444.771: 0.3508% ( 12) 00:27:39.404 1444.771 - 1452.218: 0.3880% ( 21) 00:27:39.404 1452.218 - 1459.665: 0.4110% ( 13) 00:27:39.404 1459.665 - 1467.113: 0.4482% ( 21) 00:27:39.404 1467.113 - 1474.560: 0.4765% ( 16) 00:27:39.404 1474.560 - 1482.007: 0.5191% ( 24) 00:27:39.404 1482.007 - 1489.455: 0.5758% ( 32) 00:27:39.404 1489.455 
00:27:39.404 [nvme_perf latency histogram continues; per-bucket detail from 1496.902 us through 5272.669 us elided for readability: the cumulative distribution crosses 50% between the 2159.709 us and 2174.604 us buckets and reaches 100.0000% at 5272.669 us]
00:27:39.406 00:27:39.406 21:24:01 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:27:39.406 00:27:39.406 real 0m2.629s 00:27:39.406 user 0m2.230s 00:27:39.406 sys 0m0.227s 00:27:39.406 21:24:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:39.406 21:24:01 -- common/autotest_common.sh@10 -- # set +x 00:27:39.406 ************************************ 00:27:39.406 END TEST nvme_perf 00:27:39.406 ************************************ 00:27:39.406 21:24:01 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:27:39.406 21:24:01 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:27:39.406 21:24:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:39.406 21:24:01 -- common/autotest_common.sh@10 -- # set +x 00:27:39.406 ************************************ 00:27:39.406 START TEST nvme_hello_world 00:27:39.406 ************************************ 00:27:39.406 21:24:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:27:39.406 Initializing NVMe Controllers 00:27:39.406 Attached to 0000:00:06.0 00:27:39.406 Namespace ID: 1 size: 5GB 00:27:39.406 Initialization complete. 00:27:39.406 INFO: using host memory buffer for IO 00:27:39.406 Hello world!
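The hello_world run above is the smallest end-to-end check in this suite: it attaches to the controller at 0000:00:06.0, reports namespace 1 (5GB), then writes a buffer through host memory and reads it back. A minimal sketch of rerunning just this step outside the harness, assuming the same checkout path and shared-memory instance id as this log (this is a sketch, not the harness's exact wrapper):

    #!/usr/bin/env bash
    set -e
    SPDK=/home/vagrant/spdk_repo/spdk
    sudo "$SPDK/scripts/setup.sh"                    # rebind NVMe devices to a userspace driver first
    sudo "$SPDK/build/examples/hello_world" -i 0     # -i 0: shared-memory instance id, as in the log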
00:27:39.406 00:27:39.406 real 0m0.311s 00:27:39.406 user 0m0.093s 00:27:39.406 sys 0m0.121s 00:27:39.406 21:24:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:39.406 ************************************ 00:27:39.406 END TEST nvme_hello_world 00:27:39.406 ************************************ 00:27:39.406 21:24:02 -- common/autotest_common.sh@10 -- # set +x 00:27:39.665 21:24:02 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:27:39.665 21:24:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:39.665 21:24:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:39.665 21:24:02 -- common/autotest_common.sh@10 -- # set +x 00:27:39.665 ************************************ 00:27:39.665 START TEST nvme_sgl 00:27:39.665 ************************************ 00:27:39.665 21:24:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:27:39.924 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:27:39.924 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:27:39.924 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:27:39.924 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:27:39.924 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:27:39.924 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:27:39.924 NVMe Readv/Writev Request test 00:27:39.924 Attached to 0000:00:06.0 00:27:39.924 0000:00:06.0: build_io_request_2 test passed 00:27:39.924 0000:00:06.0: build_io_request_4 test passed 00:27:39.924 0000:00:06.0: build_io_request_5 test passed 00:27:39.924 0000:00:06.0: build_io_request_6 test passed 00:27:39.924 0000:00:06.0: build_io_request_7 test passed 00:27:39.924 0000:00:06.0: build_io_request_10 test passed 00:27:39.924 Cleaning up... 00:27:39.924 00:27:39.924 real 0m0.352s 00:27:39.924 user 0m0.159s 00:27:39.924 sys 0m0.109s 00:27:39.924 21:24:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:39.924 21:24:02 -- common/autotest_common.sh@10 -- # set +x 00:27:39.924 ************************************ 00:27:39.924 END TEST nvme_sgl 00:27:39.924 ************************************ 00:27:39.924 21:24:02 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:27:39.924 21:24:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:39.924 21:24:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:39.924 21:24:02 -- common/autotest_common.sh@10 -- # set +x 00:27:39.924 ************************************ 00:27:39.924 START TEST nvme_e2edp 00:27:39.924 ************************************ 00:27:39.924 21:24:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:27:40.182 NVMe Write/Read with End-to-End data protection test 00:27:40.183 Attached to 0000:00:06.0 00:27:40.183 Cleaning up... 
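In the sgl output above, the "Invalid IO length parameter" lines for build_io_request 0, 1, 3, 8, 9 and 11 are deliberate negative cases; the harness only checks the binary's exit status. Every step in this log is bracketed the same way by a run_test helper from autotest_common.sh. A rough, assumption-level reimplementation of that bracketing (not the actual SPDK helper) would look like:

    run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                      # produces the real/user/sys lines seen after each test
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
    }
    run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl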
00:27:40.183 00:27:40.183 real 0m0.301s 00:27:40.183 user 0m0.097s 00:27:40.183 sys 0m0.125s 00:27:40.183 ************************************ 00:27:40.183 END TEST nvme_e2edp 00:27:40.183 21:24:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:40.183 21:24:02 -- common/autotest_common.sh@10 -- # set +x 00:27:40.183 ************************************ 00:27:40.183 21:24:02 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:27:40.183 21:24:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:40.183 21:24:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:40.183 21:24:02 -- common/autotest_common.sh@10 -- # set +x 00:27:40.183 ************************************ 00:27:40.183 START TEST nvme_reserve 00:27:40.183 ************************************ 00:27:40.183 21:24:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:27:40.749 ===================================================== 00:27:40.749 NVMe Controller at PCI bus 0, device 6, function 0 00:27:40.749 ===================================================== 00:27:40.749 Reservations: Not Supported 00:27:40.749 Reservation test passed 00:27:40.749 00:27:40.749 real 0m0.301s 00:27:40.749 user 0m0.076s 00:27:40.749 sys 0m0.144s 00:27:40.749 21:24:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:40.749 21:24:03 -- common/autotest_common.sh@10 -- # set +x 00:27:40.749 ************************************ 00:27:40.749 END TEST nvme_reserve 00:27:40.749 ************************************ 00:27:40.749 21:24:03 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:27:40.749 21:24:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:40.749 21:24:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:40.749 21:24:03 -- common/autotest_common.sh@10 -- # set +x 00:27:40.749 ************************************ 00:27:40.749 START TEST nvme_err_injection 00:27:40.749 ************************************ 00:27:40.749 21:24:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:27:41.007 NVMe Error Injection test 00:27:41.007 Attached to 0000:00:06.0 00:27:41.007 0000:00:06.0: get features failed as expected 00:27:41.007 0000:00:06.0: get features successfully as expected 00:27:41.007 0000:00:06.0: read failed as expected 00:27:41.007 0000:00:06.0: read successfully as expected 00:27:41.007 Cleaning up... 
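The err_injection step drives SPDK's error-injection hooks from a C test binary: each operation is first forced to fail ("failed as expected") and then retried cleanly ("successfully as expected"). The same mechanism is exposed over RPC, which the bdev_nvme_reset_stuck_adm_cmd test further down in this log relies on. A sketch of that RPC form, with every flag copied from the later test (an SPDK target must already be running with the controller attached as nvme0; opcode 10 is admin Get Features, 0x0a):

    # Inject one admin-command failure (SCT 0, SC 1) into controller nvme0.
    # --do_not_submit additionally holds the command so it appears stuck.
    rpc.py bdev_nvme_add_error_injection -n nvme0 \
        --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 \
        --sct 0 --sc 1 --do_not_submit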
00:27:41.007 00:27:41.007 real 0m0.301s 00:27:41.007 user 0m0.095s 00:27:41.007 sys 0m0.106s 00:27:41.007 21:24:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:41.007 21:24:03 -- common/autotest_common.sh@10 -- # set +x 00:27:41.007 ************************************ 00:27:41.007 END TEST nvme_err_injection 00:27:41.007 ************************************ 00:27:41.007 21:24:03 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:27:41.007 21:24:03 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:27:41.007 21:24:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:41.007 21:24:03 -- common/autotest_common.sh@10 -- # set +x 00:27:41.007 ************************************ 00:27:41.007 START TEST nvme_overhead 00:27:41.007 ************************************ 00:27:41.007 21:24:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:27:42.379 Initializing NVMe Controllers 00:27:42.379 Attached to 0000:00:06.0 00:27:42.379 Initialization complete. Launching workers. 00:27:42.379 submit (in ns) avg, min, max = 15198.9, 11861.4, 202176.4 00:27:42.379 complete (in ns) avg, min, max = 10098.9, 8193.6, 145192.7 00:27:42.379 00:27:42.379 Submit histogram 00:27:42.379 ================ 00:27:42.379 Range in us Cumulative Count 00:27:42.379
00:27:42.379 [per-bucket submit-latency histogram elided for readability: buckets span 11.811 us to 202.938 us; the cumulative count crosses 50% at the 14.022 us bucket and reaches 100.0000% at 202.938 us]
00:27:42.380 00:27:42.380 Complete histogram 00:27:42.380 ================== 00:27:42.380 Range in us Cumulative Count 00:27:42.380
00:27:42.380 [per-bucket complete-latency histogram elided for readability: buckets span 8.145 us to 145.222 us; the cumulative count crosses 50% at the 9.135 us bucket and reaches 100.0000% at 145.222 us]
00:27:42.381 00:27:42.381 00:27:42.381 real 0m1.277s 00:27:42.381 user 0m1.092s 00:27:42.381 sys 0m0.117s 00:27:42.381 21:24:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:42.381 21:24:04 -- common/autotest_common.sh@10 -- # set +x 00:27:42.381 ************************************ 00:27:42.381 END TEST nvme_overhead 00:27:42.381 ************************************ 00:27:42.381
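The overhead test (-o 4096 -t 1, with -H apparently enabling the per-bucket histograms) times each submission and completion on the host side; the elided histograms are cumulative, so any percentile falls out of the first bucket whose cumulative column reaches the target. A small sketch for pulling a median out of a captured raw histogram, assuming the "low - high: percent% ( count )" bucket format shown in this log (the file name here is hypothetical):

    # Print the first bucket at or beyond the 50th percentile.
    awk -F'[:%]' '/ - /{ if ($2+0 >= 50) { print $1, "us"; exit } }' overhead_submit_hist.txt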
21:24:04 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:27:42.381 21:24:04 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:27:42.381 21:24:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:42.381 21:24:04 -- common/autotest_common.sh@10 -- # set +x 00:27:42.381 ************************************ 00:27:42.381 START TEST nvme_arbitration 00:27:42.381 ************************************ 00:27:42.381 21:24:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:27:45.657 Initializing NVMe Controllers 00:27:45.657 Attached to 0000:00:06.0 00:27:45.657 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:27:45.657 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:27:45.657 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:27:45.657 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:27:45.657 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:27:45.657 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:27:45.657 Initialization complete. Launching workers. 00:27:45.657 Starting thread on core 1 with urgent priority queue 00:27:45.657 Starting thread on core 2 with urgent priority queue 00:27:45.657 Starting thread on core 3 with urgent priority queue 00:27:45.657 Starting thread on core 0 with urgent priority queue 00:27:45.657 QEMU NVMe Ctrl (12340 ) core 0: 7589.00 IO/s 13.18 secs/100000 ios 00:27:45.657 QEMU NVMe Ctrl (12340 ) core 1: 7660.33 IO/s 13.05 secs/100000 ios 00:27:45.657 QEMU NVMe Ctrl (12340 ) core 2: 4215.00 IO/s 23.72 secs/100000 ios 00:27:45.657 QEMU NVMe Ctrl (12340 ) core 3: 4056.00 IO/s 24.65 secs/100000 ios 00:27:45.657 ======================================================== 00:27:45.657 00:27:45.657 00:27:45.657 real 0m3.397s 00:27:45.657 user 0m9.173s 00:27:45.657 sys 0m0.181s 00:27:45.657 21:24:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:45.657 21:24:08 -- common/autotest_common.sh@10 -- # set +x 00:27:45.657 ************************************ 00:27:45.657 END TEST nvme_arbitration 00:27:45.657 ************************************ 00:27:45.657 21:24:08 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:27:45.657 21:24:08 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:27:45.657 21:24:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:45.657 21:24:08 -- common/autotest_common.sh@10 -- # set +x 00:27:45.657 ************************************ 00:27:45.657 START TEST nvme_single_aen 00:27:45.657 ************************************ 00:27:45.657 21:24:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:27:45.915 [2024-06-07 21:24:08.349811] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
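The arbitration run above pins one submission thread per core in the 0xf mask and lets the controller arbitrate among their queues; the uneven split in the results (cores 0 and 1 near 7,600 IO/s, cores 2 and 3 near 4,100 IO/s) is the visible effect. Rerunning it outside the harness needs only the two flags the wrapper passes; the long "-q 64 -s 131072 -w randrw -M 50 ..." configuration line is printed by the binary itself:

    # -t 3: run for 3 seconds; -i 0: join shared-memory instance 0, as elsewhere in this log.
    sudo /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0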
00:27:45.915 [2024-06-07 21:24:08.350037] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.915 [2024-06-07 21:24:08.532743] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:27:45.915 Asynchronous Event Request test 00:27:45.915 Attached to 0000:00:06.0 00:27:45.915 Reset controller to setup AER completions for this process 00:27:45.915 Registering asynchronous event callbacks... 00:27:45.915 Getting orig temperature thresholds of all controllers 00:27:45.915 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:27:45.915 Setting all controllers temperature threshold low to trigger AER 00:27:45.915 Waiting for all controllers temperature threshold to be set lower 00:27:45.915 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:27:45.915 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:27:45.915 Waiting for all controllers to trigger AER and reset threshold 00:27:45.915 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:27:45.915 Cleaning up... 00:27:45.915 00:27:45.915 real 0m0.258s 00:27:45.915 user 0m0.066s 00:27:45.915 sys 0m0.111s 00:27:45.915 21:24:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:45.915 ************************************ 00:27:45.915 21:24:08 -- common/autotest_common.sh@10 -- # set +x 00:27:45.915 END TEST nvme_single_aen 00:27:45.915 ************************************ 00:27:46.173 21:24:08 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:27:46.173 21:24:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:46.173 21:24:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:46.173 21:24:08 -- common/autotest_common.sh@10 -- # set +x 00:27:46.173 ************************************ 00:27:46.174 START TEST nvme_doorbell_aers 00:27:46.174 ************************************ 00:27:46.174 21:24:08 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:27:46.174 21:24:08 -- nvme/nvme.sh@70 -- # bdfs=() 00:27:46.174 21:24:08 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:27:46.174 21:24:08 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:27:46.174 21:24:08 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:27:46.174 21:24:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:46.174 21:24:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:46.174 21:24:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:46.174 21:24:08 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:46.174 21:24:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:46.174 21:24:08 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:46.174 21:24:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:27:46.174 21:24:08 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:27:46.174 21:24:08 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:27:46.432 [2024-06-07 21:24:08.917265] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 152889) is not found. Dropping the request. 
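The doorbell test being set up here first enumerates controllers with the get_nvme_bdfs plumbing traced above: gen_nvme.sh emits an SPDK JSON config covering every local NVMe device, and jq extracts the PCI addresses. The same idea, standalone:

    # Enumerate NVMe PCI addresses (BDFs) the way the harness does.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"    # e.g. 0000:00:06.0 on this VM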
00:27:56.397 Executing: test_write_invalid_db 00:27:56.397 Waiting for AER completion... 00:27:56.398 Failure: test_write_invalid_db 00:27:56.398 00:27:56.398 Executing: test_invalid_db_write_overflow_sq 00:27:56.398 Waiting for AER completion... 00:27:56.398 Failure: test_invalid_db_write_overflow_sq 00:27:56.398 00:27:56.398 Executing: test_invalid_db_write_overflow_cq 00:27:56.398 Waiting for AER completion... 00:27:56.398 Failure: test_invalid_db_write_overflow_cq 00:27:56.398 00:27:56.398 00:27:56.398 real 0m10.106s 00:27:56.398 user 0m8.361s 00:27:56.398 sys 0m1.671s 00:27:56.398 21:24:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:56.398 ************************************ 00:27:56.398 END TEST nvme_doorbell_aers 00:27:56.398 ************************************ 00:27:56.398 21:24:18 -- common/autotest_common.sh@10 -- # set +x 00:27:56.398 21:24:18 -- nvme/nvme.sh@97 -- # uname 00:27:56.398 21:24:18 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:27:56.398 21:24:18 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:27:56.398 21:24:18 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:27:56.398 21:24:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:56.398 21:24:18 -- common/autotest_common.sh@10 -- # set +x 00:27:56.398 ************************************ 00:27:56.398 START TEST nvme_multi_aen 00:27:56.398 ************************************ 00:27:56.398 21:24:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:27:56.398 [2024-06-07 21:24:18.817511] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:56.398 [2024-06-07 21:24:18.817820] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.398 [2024-06-07 21:24:19.021663] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:27:56.398 [2024-06-07 21:24:19.021758] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 152889) is not found. Dropping the request. 00:27:56.398 [2024-06-07 21:24:19.021849] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 152889) is not found. Dropping the request. 00:27:56.398 [2024-06-07 21:24:19.021884] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 152889) is not found. Dropping the request. 00:27:56.398 [2024-06-07 21:24:19.029210] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:56.398 Child process pid: 153092 00:27:56.398 [2024-06-07 21:24:19.029466] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.657 [Child] Asynchronous Event Request test 00:27:56.657 [Child] Attached to 0000:00:06.0 00:27:56.657 [Child] Registering asynchronous event callbacks... 
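Each doorbell sub-case above (write to an invalid doorbell, SQ doorbell overflow, CQ doorbell overflow) writes a bad doorbell value and then waits for the controller to raise an AER; because a controller that never raises one would block forever, the harness bounds the whole run with a watchdog, as its trace earlier shows:

    # 10-second watchdog; --preserve-status keeps the test binary's own exit code.
    timeout --preserve-status 10 \
        /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers \
        -r 'trtype:PCIe traddr:0000:00:06.0'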
00:27:56.657 [Child] Getting orig temperature thresholds of all controllers 00:27:56.657 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:27:56.657 [Child] Waiting for all controllers to trigger AER and reset threshold 00:27:56.657 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:27:56.657 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:27:56.657 [Child] Cleaning up... 00:27:56.916 Asynchronous Event Request test 00:27:56.916 Attached to 0000:00:06.0 00:27:56.916 Reset controller to setup AER completions for this process 00:27:56.916 Registering asynchronous event callbacks... 00:27:56.916 Getting orig temperature thresholds of all controllers 00:27:56.916 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:27:56.916 Setting all controllers temperature threshold low to trigger AER 00:27:56.916 Waiting for all controllers temperature threshold to be set lower 00:27:56.916 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:27:56.916 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:27:56.916 Waiting for all controllers to trigger AER and reset threshold 00:27:56.916 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:27:56.916 Cleaning up... 00:27:56.916 00:27:56.916 real 0m0.567s 00:27:56.916 user 0m0.178s 00:27:56.916 sys 0m0.204s 00:27:56.916 21:24:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:56.916 ************************************ 00:27:56.916 END TEST nvme_multi_aen 00:27:56.916 ************************************ 00:27:56.916 21:24:19 -- common/autotest_common.sh@10 -- # set +x 00:27:56.916 21:24:19 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:27:56.916 21:24:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:27:56.916 21:24:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:56.916 21:24:19 -- common/autotest_common.sh@10 -- # set +x 00:27:56.916 ************************************ 00:27:56.916 START TEST nvme_startup 00:27:56.916 ************************************ 00:27:56.916 21:24:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:27:57.175 Initializing NVMe Controllers 00:27:57.175 Attached to 0000:00:06.0 00:27:57.175 Initialization complete. 00:27:57.175 Time used:181149.953 (us). 
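The startup probe above measures cold attach time: "Time used:181149.953 (us)." is roughly 181 ms from probe to ready on this QEMU controller. The invocation is a one-liner; reading -t 1000000 as an upper bound in microseconds is an assumption based on the printed timing, since only the flag value appears in this log:

    sudo /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000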
00:27:57.175 00:27:57.175 real 0m0.255s 00:27:57.175 user 0m0.087s 00:27:57.175 sys 0m0.108s 00:27:57.175 21:24:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:57.175 21:24:19 -- common/autotest_common.sh@10 -- # set +x 00:27:57.175 ************************************ 00:27:57.175 END TEST nvme_startup 00:27:57.175 ************************************ 00:27:57.175 21:24:19 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:27:57.175 21:24:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:57.175 21:24:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:57.175 21:24:19 -- common/autotest_common.sh@10 -- # set +x 00:27:57.175 ************************************ 00:27:57.175 START TEST nvme_multi_secondary 00:27:57.175 ************************************ 00:27:57.175 21:24:19 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:27:57.175 21:24:19 -- nvme/nvme.sh@52 -- # pid0=153158 00:27:57.175 21:24:19 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:27:57.175 21:24:19 -- nvme/nvme.sh@54 -- # pid1=153159 00:27:57.175 21:24:19 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:27:57.175 21:24:19 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:28:00.460 Initializing NVMe Controllers 00:28:00.460 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:00.460 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:28:00.460 Initialization complete. Launching workers. 00:28:00.460 ======================================================== 00:28:00.460 Latency(us) 00:28:00.460 Device Information : IOPS MiB/s Average min max 00:28:00.460 PCIE (0000:00:06.0) NSID 1 from core 2: 14279.66 55.78 1120.22 155.35 20643.05 00:28:00.460 ======================================================== 00:28:00.460 Total : 14279.66 55.78 1120.22 155.35 20643.05 00:28:00.460 00:28:00.460 21:24:23 -- nvme/nvme.sh@56 -- # wait 153158 00:28:00.724 Initializing NVMe Controllers 00:28:00.724 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:00.724 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:28:00.724 Initialization complete. Launching workers. 00:28:00.724 ======================================================== 00:28:00.724 Latency(us) 00:28:00.724 Device Information : IOPS MiB/s Average min max 00:28:00.724 PCIE (0000:00:06.0) NSID 1 from core 1: 33161.12 129.54 482.17 141.19 2708.05 00:28:00.724 ======================================================== 00:28:00.724 Total : 33161.12 129.54 482.17 141.19 2708.05 00:28:00.724 00:28:02.623 Initializing NVMe Controllers 00:28:02.623 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:02.623 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:28:02.623 Initialization complete. Launching workers. 
00:28:02.623 ======================================================== 00:28:02.623 Latency(us) 00:28:02.623 Device Information : IOPS MiB/s Average min max 00:28:02.623 PCIE (0000:00:06.0) NSID 1 from core 0: 41222.40 161.03 387.81 114.82 2514.45 00:28:02.623 ======================================================== 00:28:02.623 Total : 41222.40 161.03 387.81 114.82 2514.45 00:28:02.623 00:28:02.623 21:24:25 -- nvme/nvme.sh@57 -- # wait 153159 00:28:02.623 21:24:25 -- nvme/nvme.sh@61 -- # pid0=153246 00:28:02.623 21:24:25 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:28:02.623 21:24:25 -- nvme/nvme.sh@63 -- # pid1=153247 00:28:02.623 21:24:25 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:28:02.623 21:24:25 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:28:05.931 Initializing NVMe Controllers 00:28:05.931 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:05.931 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:28:05.931 Initialization complete. Launching workers. 00:28:05.931 ======================================================== 00:28:05.931 Latency(us) 00:28:05.931 Device Information : IOPS MiB/s Average min max 00:28:05.931 PCIE (0000:00:06.0) NSID 1 from core 1: 28610.78 111.76 558.88 152.53 2981.26 00:28:05.931 ======================================================== 00:28:05.931 Total : 28610.78 111.76 558.88 152.53 2981.26 00:28:05.931 00:28:06.189 Initializing NVMe Controllers 00:28:06.189 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:06.189 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:28:06.189 Initialization complete. Launching workers. 00:28:06.189 ======================================================== 00:28:06.189 Latency(us) 00:28:06.189 Device Information : IOPS MiB/s Average min max 00:28:06.189 PCIE (0000:00:06.0) NSID 1 from core 0: 28795.10 112.48 555.32 167.42 1972.17 00:28:06.189 ======================================================== 00:28:06.189 Total : 28795.10 112.48 555.32 167.42 1972.17 00:28:06.189 00:28:08.086 Initializing NVMe Controllers 00:28:08.086 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:08.086 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:28:08.086 Initialization complete. Launching workers. 
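These interleaved perf runs are the point of nvme_multi_secondary: one spdk_nvme_perf process initializes the shared state and two more attach to the same controller as secondaries, distinguished only by core mask. A sketch of the flag pattern, copied from the first round above (shared -i 0, disjoint -c masks so no two processes contend for a core; which process acts as primary is assumed to follow start order):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_nvme_perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &  # started first
    "$SPDK/build/bin/spdk_nvme_perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &  # secondary on core 1
    "$SPDK/build/bin/spdk_nvme_perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &  # secondary on core 2
    wait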
00:28:08.086 ======================================================== 00:28:08.086 Latency(us) 00:28:08.086 Device Information : IOPS MiB/s Average min max 00:28:08.086 PCIE (0000:00:06.0) NSID 1 from core 2: 16058.46 62.73 995.84 143.86 20982.59 00:28:08.086 ======================================================== 00:28:08.086 Total : 16058.46 62.73 995.84 143.86 20982.59 00:28:08.086 00:28:08.086 21:24:30 -- nvme/nvme.sh@65 -- # wait 153246 00:28:08.086 21:24:30 -- nvme/nvme.sh@66 -- # wait 153247 00:28:08.086 00:28:08.086 real 0m10.767s 00:28:08.086 user 0m18.613s 00:28:08.086 sys 0m0.687s 00:28:08.086 21:24:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:08.086 ************************************ 00:28:08.086 END TEST nvme_multi_secondary 00:28:08.086 21:24:30 -- common/autotest_common.sh@10 -- # set +x 00:28:08.086 ************************************ 00:28:08.086 21:24:30 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:28:08.086 21:24:30 -- nvme/nvme.sh@102 -- # kill_stub 00:28:08.086 21:24:30 -- common/autotest_common.sh@1065 -- # [[ -e /proc/152450 ]] 00:28:08.086 21:24:30 -- common/autotest_common.sh@1066 -- # kill 152450 00:28:08.086 21:24:30 -- common/autotest_common.sh@1067 -- # wait 152450 00:28:09.020 [2024-06-07 21:24:31.384943] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 153091) is not found. Dropping the request. 00:28:09.020 [2024-06-07 21:24:31.385128] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 153091) is not found. Dropping the request. 00:28:09.020 [2024-06-07 21:24:31.385244] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 153091) is not found. Dropping the request. 00:28:09.020 [2024-06-07 21:24:31.385347] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 153091) is not found. Dropping the request. 00:28:09.020 21:24:31 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:28:09.020 21:24:31 -- common/autotest_common.sh@1073 -- # echo 2 00:28:09.020 21:24:31 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:28:09.020 21:24:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:09.020 21:24:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:09.020 21:24:31 -- common/autotest_common.sh@10 -- # set +x 00:28:09.020 ************************************ 00:28:09.020 START TEST bdev_nvme_reset_stuck_adm_cmd 00:28:09.020 ************************************ 00:28:09.020 21:24:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:28:09.020 * Looking for test storage... 
00:28:09.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:28:09.020 21:24:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:28:09.020 21:24:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:28:09.020 21:24:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:28:09.020 21:24:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:28:09.020 21:24:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:28:09.020 21:24:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:28:09.020 21:24:31 -- common/autotest_common.sh@1509 -- # bdfs=() 00:28:09.020 21:24:31 -- common/autotest_common.sh@1509 -- # local bdfs 00:28:09.020 21:24:31 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:28:09.020 21:24:31 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:28:09.020 21:24:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:28:09.020 21:24:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:28:09.020 21:24:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:09.020 21:24:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:28:09.020 21:24:31 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:09.020 21:24:31 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:28:09.020 21:24:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:28:09.020 21:24:31 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:28:09.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.020 21:24:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:28:09.020 21:24:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:28:09.020 21:24:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=153417 00:28:09.020 21:24:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:28:09.020 21:24:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:09.020 21:24:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 153417 00:28:09.020 21:24:31 -- common/autotest_common.sh@819 -- # '[' -z 153417 ']' 00:28:09.020 21:24:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.020 21:24:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:09.021 21:24:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.021 21:24:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:09.021 21:24:31 -- common/autotest_common.sh@10 -- # set +x 00:28:09.279 [2024-06-07 21:24:31.698779] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:09.279 [2024-06-07 21:24:31.699119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153417 ] 00:28:09.279 [2024-06-07 21:24:31.898909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:09.537 [2024-06-07 21:24:31.970858] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:09.537 [2024-06-07 21:24:31.971250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.537 [2024-06-07 21:24:31.971380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:09.537 [2024-06-07 21:24:31.972124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.537 [2024-06-07 21:24:31.972094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:10.104 21:24:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:10.104 21:24:32 -- common/autotest_common.sh@852 -- # return 0 00:28:10.104 21:24:32 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:28:10.104 21:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.104 21:24:32 -- common/autotest_common.sh@10 -- # set +x 00:28:10.104 nvme0n1 00:28:10.104 21:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.104 21:24:32 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:28:10.104 21:24:32 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_hUCgx.txt 00:28:10.104 21:24:32 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:28:10.104 21:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.104 21:24:32 -- common/autotest_common.sh@10 -- # set +x 00:28:10.104 true 00:28:10.104 21:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.104 21:24:32 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:28:10.104 21:24:32 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1717795472 00:28:10.104 21:24:32 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=153442 00:28:10.104 21:24:32 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:10.104 21:24:32 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:28:10.104 21:24:32 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:12.635 21:24:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:12.635 21:24:34 -- common/autotest_common.sh@10 -- # set +x 00:28:12.635 [2024-06-07 21:24:34.690593] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:28:12.635 [2024-06-07 21:24:34.691012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.635 [2024-06-07 21:24:34.691138] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:28:12.635 [2024-06-07 21:24:34.691186] nvme_qpair.c: 
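The sequence traced above arms a one-shot failure for admin opcode 10 (Get Features) with --do_not_submit, so the command sits in the queue unsubmitted; bdev_nvme_reset_controller must then complete it manually, which is exactly what the "Command completed manually ... INVALID OPCODE" completion below records. Condensed, the rpc.py flow (all commands as they appear in this log; a running spdk_tgt is assumed) is:

    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0
    rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
    sleep 2                                    # let the injected command get stuck
    rpc.py bdev_nvme_reset_controller nvme0    # the reset aborts the pending admin command
    wait                                       # collect the completion (expected SCT 0x0, SC 0x1)
    rpc.py bdev_nvme_detach_controller nvme0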
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.635 [2024-06-07 21:24:34.693042] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:12.635 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 153442 00:28:12.635 21:24:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 153442 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 153442 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.635 21:24:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:12.635 21:24:34 -- common/autotest_common.sh@10 -- # set +x 00:28:12.635 21:24:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_hUCgx.txt 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_hUCgx.txt 00:28:12.635 21:24:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 153417 00:28:12.635 21:24:34 -- common/autotest_common.sh@926 -- # '[' -z 153417 ']' 00:28:12.635 21:24:34 -- common/autotest_common.sh@930 -- # kill -0 153417 00:28:12.635 21:24:34 -- common/autotest_common.sh@931 -- # uname 00:28:12.635 
21:24:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:12.635 21:24:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 153417 00:28:12.635 killing process with pid 153417 00:28:12.635 21:24:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:12.635 21:24:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:12.635 21:24:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 153417' 00:28:12.635 21:24:34 -- common/autotest_common.sh@945 -- # kill 153417 00:28:12.635 21:24:34 -- common/autotest_common.sh@950 -- # wait 153417 00:28:12.635 21:24:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:28:12.635 21:24:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:28:12.635 00:28:12.635 real 0m3.777s 00:28:12.635 user 0m13.473s 00:28:12.635 sys 0m0.583s 00:28:12.635 21:24:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:12.635 ************************************ 00:28:12.635 END TEST bdev_nvme_reset_stuck_adm_cmd 00:28:12.635 21:24:35 -- common/autotest_common.sh@10 -- # set +x 00:28:12.635 ************************************ 00:28:12.895 21:24:35 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:28:12.895 21:24:35 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:28:12.895 21:24:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:12.895 21:24:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:12.895 21:24:35 -- common/autotest_common.sh@10 -- # set +x 00:28:12.895 ************************************ 00:28:12.895 START TEST nvme_fio 00:28:12.895 ************************************ 00:28:12.895 21:24:35 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:28:12.895 21:24:35 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:28:12.895 21:24:35 -- nvme/nvme.sh@32 -- # ran_fio=false 00:28:12.895 21:24:35 -- nvme/nvme.sh@33 -- # bdfs=($(get_nvme_bdfs)) 00:28:12.895 21:24:35 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:28:12.895 21:24:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:28:12.895 21:24:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:28:12.895 21:24:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:12.895 21:24:35 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:12.895 21:24:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:28:12.895 21:24:35 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:28:12.895 21:24:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:28:12.895 21:24:35 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:28:12.895 21:24:35 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:28:12.895 21:24:35 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:28:12.895 21:24:35 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:28:13.154 21:24:35 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:28:13.154 21:24:35 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:28:13.424 21:24:35 -- nvme/nvme.sh@41 -- # bs=4096 00:28:13.424 21:24:35 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:28:13.424 
21:24:35 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:28:13.424 21:24:35 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:13.424 21:24:35 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:28:13.424 21:24:35 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:13.424 21:24:35 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:13.424 21:24:35 -- common/autotest_common.sh@1320 -- # shift 00:28:13.424 21:24:35 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:13.424 21:24:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:13.424 21:24:35 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:13.424 21:24:35 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:13.424 21:24:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:13.424 21:24:35 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:28:13.424 21:24:35 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:28:13.424 21:24:35 -- common/autotest_common.sh@1326 -- # break 00:28:13.424 21:24:35 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:28:13.424 21:24:35 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:28:13.424 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:13.424 fio-3.35 00:28:13.424 Starting 1 thread 00:28:17.625 00:28:17.625 test: (groupid=0, jobs=1): err= 0: pid=153570: Fri Jun 7 21:24:39 2024 00:28:17.625 read: IOPS=16.5k, BW=64.3MiB/s (67.4MB/s)(129MiB/2001msec) 00:28:17.625 slat (nsec): min=3912, max=98700, avg=6088.90, stdev=2041.60 00:28:17.625 clat (usec): min=253, max=8741, avg=3856.11, stdev=634.39 00:28:17.625 lat (usec): min=258, max=8747, avg=3862.20, stdev=635.19 00:28:17.625 clat percentiles (usec): 00:28:17.625 | 1.00th=[ 2933], 5.00th=[ 3130], 10.00th=[ 3195], 20.00th=[ 3326], 00:28:17.625 | 30.00th=[ 3425], 40.00th=[ 3589], 50.00th=[ 3916], 60.00th=[ 4047], 00:28:17.625 | 70.00th=[ 4146], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4621], 00:28:17.625 | 99.00th=[ 6587], 99.50th=[ 7308], 99.90th=[ 7832], 99.95th=[ 7898], 00:28:17.625 | 99.99th=[ 8291] 00:28:17.625 bw ( KiB/s): min=64816, max=70459, per=100.00%, avg=67385.00, stdev=2855.19, samples=3 00:28:17.625 iops : min=16204, max=17614, avg=16846.00, stdev=713.39, samples=3 00:28:17.625 write: IOPS=16.5k, BW=64.4MiB/s (67.6MB/s)(129MiB/2001msec); 0 zone resets 00:28:17.625 slat (nsec): min=4170, max=56942, avg=6272.77, stdev=2085.12 00:28:17.625 clat (usec): min=274, max=12025, avg=3890.04, stdev=703.94 00:28:17.625 lat (usec): min=280, max=12029, avg=3896.31, stdev=704.72 00:28:17.625 clat percentiles (usec): 00:28:17.625 | 1.00th=[ 2966], 5.00th=[ 3130], 10.00th=[ 3228], 20.00th=[ 3326], 00:28:17.625 | 30.00th=[ 3458], 40.00th=[ 3621], 50.00th=[ 3916], 60.00th=[ 4047], 00:28:17.625 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4686], 00:28:17.625 | 99.00th=[ 7046], 99.50th=[ 7504], 99.90th=[10028], 99.95th=[10421], 
00:28:17.625 | 99.99th=[11731] 00:28:17.625 bw ( KiB/s): min=64664, max=70483, per=100.00%, avg=67283.67, stdev=2952.49, samples=3 00:28:17.625 iops : min=16166, max=17620, avg=16820.67, stdev=737.72, samples=3 00:28:17.625 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.02% 00:28:17.625 lat (msec) : 2=0.18%, 4=56.14%, 10=43.58%, 20=0.06% 00:28:17.625 cpu : usr=99.65%, sys=0.20%, ctx=27, majf=0, minf=39 00:28:17.625 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:17.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:17.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:17.625 issued rwts: total=32918,33000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:17.625 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:17.625 00:28:17.625 Run status group 0 (all jobs): 00:28:17.625 READ: bw=64.3MiB/s (67.4MB/s), 64.3MiB/s-64.3MiB/s (67.4MB/s-67.4MB/s), io=129MiB (135MB), run=2001-2001msec 00:28:17.625 WRITE: bw=64.4MiB/s (67.6MB/s), 64.4MiB/s-64.4MiB/s (67.6MB/s-67.6MB/s), io=129MiB (135MB), run=2001-2001msec 00:28:17.625 ----------------------------------------------------- 00:28:17.625 Suppressions used: 00:28:17.625 count bytes template 00:28:17.625 1 32 /usr/src/fio/parse.c 00:28:17.625 ----------------------------------------------------- 00:28:17.625 00:28:17.625 21:24:40 -- nvme/nvme.sh@44 -- # ran_fio=true 00:28:17.625 21:24:40 -- nvme/nvme.sh@46 -- # true 00:28:17.625 00:28:17.625 real 0m4.945s 00:28:17.625 user 0m3.838s 00:28:17.625 sys 0m1.463s 00:28:17.625 ************************************ 00:28:17.625 END TEST nvme_fio 00:28:17.625 ************************************ 00:28:17.625 21:24:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:17.625 21:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:17.884 00:28:17.884 real 0m45.932s 00:28:17.884 user 1m57.626s 00:28:17.884 sys 0m9.408s 00:28:17.884 21:24:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:17.884 21:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:17.884 ************************************ 00:28:17.884 END TEST nvme 00:28:17.884 ************************************ 00:28:17.884 21:24:40 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:28:17.884 21:24:40 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:28:17.884 21:24:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:17.884 21:24:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:17.884 21:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:17.884 ************************************ 00:28:17.884 START TEST nvme_scc 00:28:17.884 ************************************ 00:28:17.884 21:24:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:28:17.884 * Looking for test storage... 
00:28:17.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:28:17.884 21:24:40 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:28:17.884 21:24:40 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:28:17.884 21:24:40 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:28:17.884 21:24:40 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:17.884 21:24:40 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:17.884 21:24:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.884 21:24:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.884 21:24:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.884 21:24:40 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:17.884 21:24:40 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:17.884 21:24:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:17.884 21:24:40 -- paths/export.sh@5 -- # export PATH 00:28:17.884 21:24:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:17.884 21:24:40 -- nvme/functions.sh@10 -- # ctrls=() 00:28:17.884 21:24:40 -- nvme/functions.sh@10 -- # declare -A ctrls 00:28:17.884 21:24:40 -- nvme/functions.sh@11 -- # nvmes=() 00:28:17.884 21:24:40 -- nvme/functions.sh@11 -- # declare -A nvmes 00:28:17.884 21:24:40 -- nvme/functions.sh@12 -- # bdfs=() 00:28:17.884 21:24:40 -- nvme/functions.sh@12 -- # declare -A bdfs 00:28:17.884 21:24:40 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:28:17.884 21:24:40 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:28:17.884 21:24:40 -- nvme/functions.sh@14 -- # nvme_name= 00:28:17.884 21:24:40 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:17.884 21:24:40 -- nvme/nvme_scc.sh@12 -- # uname 00:28:17.884 21:24:40 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:28:17.884 21:24:40 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 
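Two quick cross-checks on the runs above before the SCC scan proceeds. First, the fio read bandwidth is consistent with the raw counters in the job summary: 32918 reads of 4096 bytes over the 2001 ms runtime.

    # Bandwidth sanity check from the issued-I/O counters in the fio output.
    awk 'BEGIN {
        b = 32918 * 4096                      # total bytes read
        printf "%.1f MB/s  %.1f MiB/s\n", b / 2.001 / 1e6, b / 2.001 / 1048576
    }'                                        # -> 67.4 MB/s  64.3 MiB/s

Second, the status decode in the earlier reset test can be re-derived by hand. base64_decode_bits is invoked there as <b64> <shift> <mask> (shift 1 / mask 255 for SC, shift 9 / mask 3 for SCT); the saved completion (.cpl) is 16 base64-encoded bytes whose last two carry the little-endian status word, with the phase tag in bit 0 (the trace's own status=2 intermediate confirms this layout). A sketch, assuming GNU base64 and hexdump as used in the trace:

    # Re-deriving nvme_status_sc / nvme_status_sct from the saved cpl.
    cpl_b64=AAAAAAAAAAAAAAAAAAACAA==
    bytes=($(base64 -d <<<"$cpl_b64" | hexdump -ve '/1 "0x%02x\n"'))
    status=$(( bytes[15] << 8 | bytes[14] ))   # status word, 0x0002 here
    sc=$((  (status >> 1) & 0xff ))            # Status Code       -> 0x1
    sct=$(( (status >> 9) & 0x3 ))             # Status Code Type  -> 0x0
    printf 'sc=0x%x sct=0x%x\n' "$sc" "$sct"   # matches the injected --sct 0 --sc 1

Both fields match the injection, so the (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) check does not trip, and diff_time=2 stays under test_timeout=5.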
00:28:17.884 21:24:40 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:18.142 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:28:18.142 Waiting for block devices as requested 00:28:18.142 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:28:18.402 21:24:40 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:28:18.402 21:24:40 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:28:18.402 21:24:40 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:28:18.402 21:24:40 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:28:18.402 21:24:40 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:28:18.402 21:24:40 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:28:18.402 21:24:40 -- scripts/common.sh@15 -- # local i 00:28:18.402 21:24:40 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:28:18.402 21:24:40 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:28:18.402 21:24:40 -- scripts/common.sh@24 -- # return 0 00:28:18.402 21:24:40 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:28:18.402 21:24:40 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:28:18.402 21:24:40 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:28:18.402 21:24:40 -- nvme/functions.sh@18 -- # shift 00:28:18.402 21:24:40 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:28:18.402 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.402 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.402 21:24:40 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:28:18.402 21:24:40 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:28:18.402 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.402 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.402 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:28:18.402 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:28:18.402 21:24:40 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:28:18.402 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.402 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.402 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 
00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:28:18.403 21:24:40 -- 
nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:28:18.403 21:24:40 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.403 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.403 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- 
# read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:28:18.404 
21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.404 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.404 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:28:18.404 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:28:18.405 
21:24:40 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 
21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:28:18.405 21:24:40 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:28:18.405 21:24:40 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:28:18.405 21:24:40 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:28:18.405 21:24:40 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@18 -- # shift 00:28:18.405 21:24:40 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 
00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.405 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.405 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.405 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 
00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 
21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.406 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.406 21:24:40 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:28:18.406 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:28:18.407 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.407 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.407 21:24:40 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:28:18.407 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:28:18.407 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:28:18.407 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.407 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.407 21:24:40 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:28:18.407 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:28:18.407 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:28:18.407 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.407 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.407 21:24:40 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:28:18.407 21:24:40 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:28:18.407 21:24:40 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:28:18.407 21:24:40 -- nvme/functions.sh@21 -- # IFS=: 00:28:18.407 21:24:40 -- nvme/functions.sh@21 -- # read -r reg val 00:28:18.407 21:24:40 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:28:18.407 21:24:40 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:28:18.407 21:24:40 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:28:18.407 21:24:40 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:28:18.407 21:24:40 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:28:18.407 21:24:40 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:28:18.407 21:24:40 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:28:18.407 21:24:40 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:28:18.407 21:24:40 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:28:18.407 21:24:40 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:28:18.407 21:24:40 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:28:18.407 21:24:40 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:28:18.407 21:24:40 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:28:18.407 21:24:40 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:28:18.407 21:24:40 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:28:18.407 21:24:40 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:28:18.407 21:24:40 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:28:18.407 21:24:40 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:28:18.407 21:24:40 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:28:18.407 21:24:40 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:28:18.407 21:24:40 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:28:18.407 21:24:40 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:28:18.407 21:24:40 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:28:18.407 21:24:40 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:28:18.407 21:24:40 -- nvme/functions.sh@76 -- # echo 0x15d 00:28:18.407 21:24:40 -- nvme/functions.sh@184 -- # oncs=0x15d 00:28:18.407 21:24:40 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:28:18.407 21:24:40 -- nvme/functions.sh@197 -- # echo nvme0 00:28:18.407 21:24:40 -- nvme/functions.sh@205 
-- # (( 1 > 0 )) 00:28:18.407 21:24:40 -- nvme/functions.sh@206 -- # echo nvme0 00:28:18.407 21:24:40 -- nvme/functions.sh@207 -- # return 0 00:28:18.407 21:24:40 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:28:18.407 21:24:40 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:28:18.407 21:24:40 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:18.665 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:28:18.923 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:28:19.858 21:24:42 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:28:19.858 21:24:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:28:19.858 21:24:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:19.858 21:24:42 -- common/autotest_common.sh@10 -- # set +x 00:28:19.858 ************************************ 00:28:19.858 START TEST nvme_simple_copy 00:28:19.858 ************************************ 00:28:19.858 21:24:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:28:20.116 Initializing NVMe Controllers 00:28:20.116 Attaching to 0000:00:06.0 00:28:20.116 Controller supports SCC. Attached to 0000:00:06.0 00:28:20.116 Namespace ID: 1 size: 5GB 00:28:20.116 Initialization complete. 00:28:20.116 00:28:20.116 Controller QEMU NVMe Ctrl (12340 ) 00:28:20.116 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:28:20.116 Namespace Block Size:4096 00:28:20.116 Writing LBAs 0 to 63 with Random Data 00:28:20.116 Copied LBAs from 0 - 63 to the Destination LBA 256 00:28:20.117 LBAs matching Written Data: 64 00:28:20.117 00:28:20.117 real 0m0.263s 00:28:20.117 user 0m0.100s 00:28:20.117 sys 0m0.064s 00:28:20.117 21:24:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:20.117 21:24:42 -- common/autotest_common.sh@10 -- # set +x 00:28:20.117 ************************************ 00:28:20.117 END TEST nvme_simple_copy 00:28:20.117 ************************************ 00:28:20.117 00:28:20.117 real 0m2.402s 00:28:20.117 user 0m0.641s 00:28:20.117 sys 0m1.619s 00:28:20.117 21:24:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:20.117 21:24:42 -- common/autotest_common.sh@10 -- # set +x 00:28:20.117 ************************************ 00:28:20.117 END TEST nvme_scc 00:28:20.117 ************************************ 00:28:20.375 21:24:42 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:28:20.375 21:24:42 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:28:20.375 21:24:42 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:28:20.375 21:24:42 -- spdk/autotest.sh@238 -- # [[ 0 -eq 1 ]] 00:28:20.375 21:24:42 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:28:20.375 21:24:42 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:28:20.375 21:24:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:20.375 21:24:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:20.375 21:24:42 -- common/autotest_common.sh@10 -- # set +x 00:28:20.375 ************************************ 00:28:20.375 START TEST nvme_rpc 00:28:20.375 ************************************ 00:28:20.375 21:24:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:28:20.375 * Looking for test storage... 
00:28:20.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:28:20.375 21:24:42 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:20.375 21:24:42 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:28:20.375 21:24:42 -- common/autotest_common.sh@1509 -- # bdfs=() 00:28:20.375 21:24:42 -- common/autotest_common.sh@1509 -- # local bdfs 00:28:20.375 21:24:42 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:28:20.375 21:24:42 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:28:20.375 21:24:42 -- common/autotest_common.sh@1498 -- # bdfs=() 00:28:20.375 21:24:42 -- common/autotest_common.sh@1498 -- # local bdfs 00:28:20.375 21:24:42 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:20.375 21:24:42 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:20.375 21:24:42 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:28:20.375 21:24:42 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:28:20.375 21:24:42 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:28:20.375 21:24:42 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:28:20.375 21:24:42 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:28:20.375 21:24:42 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=154063 00:28:20.375 21:24:42 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:28:20.375 21:24:42 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:28:20.375 21:24:42 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 154063 00:28:20.375 21:24:42 -- common/autotest_common.sh@819 -- # '[' -z 154063 ']' 00:28:20.375 21:24:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.375 21:24:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:20.375 21:24:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.375 21:24:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:20.375 21:24:42 -- common/autotest_common.sh@10 -- # set +x 00:28:20.375 [2024-06-07 21:24:43.032343] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
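The long identify trace above is nvme/functions.sh caching every field of the controller/namespace identify output into a Bash associative array (IFS=: splits each "field : value" line, eval stores it), and the controller is then selected by feature bit: Simple Copy support is bit 8 of the ONCS field, hence the (( oncs & 1 << 8 )) test against 0x15d. A minimal standalone sketch of the same pattern, assuming nvme-cli is installed, /dev/nvme0 exists, and root privileges; the array name is illustrative, not the script's own:

    declare -A id
    while IFS=: read -r reg val; do
        # id-ctrl prints "field : value"; strip the padding on both sides
        reg=${reg//[[:space:]]/}; val=${val//[[:space:]]/}
        [[ -n $reg && -n $val ]] && id[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)
    # ONCS bit 8 = Simple Copy Command supported (0x15d has it set)
    (( id[oncs] & 1 << 8 )) && echo "nvme0 supports Simple Copy (SCC)"

Caching the whole identify dump once and testing bits afterwards is why the trace is so long: every later feature query is then a plain array lookup instead of another device round trip.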
00:28:20.375 [2024-06-07 21:24:43.032587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154063 ] 00:28:20.632 [2024-06-07 21:24:43.202393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:20.632 [2024-06-07 21:24:43.280422] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:20.632 [2024-06-07 21:24:43.280969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.633 [2024-06-07 21:24:43.280972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.564 21:24:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:21.564 21:24:43 -- common/autotest_common.sh@852 -- # return 0 00:28:21.564 21:24:43 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:28:21.828 Nvme0n1 00:28:21.828 21:24:44 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:28:21.828 21:24:44 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:28:21.829 request: 00:28:21.829 { 00:28:21.829 "filename": "non_existing_file", 00:28:21.829 "bdev_name": "Nvme0n1", 00:28:21.829 "method": "bdev_nvme_apply_firmware", 00:28:21.829 "req_id": 1 00:28:21.829 } 00:28:21.829 Got JSON-RPC error response 00:28:21.829 response: 00:28:21.829 { 00:28:21.829 "code": -32603, 00:28:21.829 "message": "open file failed." 00:28:21.829 } 00:28:21.829 21:24:44 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:28:21.829 21:24:44 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:28:21.829 21:24:44 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:28:22.087 21:24:44 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:28:22.087 21:24:44 -- nvme/nvme_rpc.sh@40 -- # killprocess 154063 00:28:22.087 21:24:44 -- common/autotest_common.sh@926 -- # '[' -z 154063 ']' 00:28:22.087 21:24:44 -- common/autotest_common.sh@930 -- # kill -0 154063 00:28:22.087 21:24:44 -- common/autotest_common.sh@931 -- # uname 00:28:22.087 21:24:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:22.087 21:24:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 154063 00:28:22.087 21:24:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:22.087 killing process with pid 154063 00:28:22.087 21:24:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:22.087 21:24:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 154063' 00:28:22.087 21:24:44 -- common/autotest_common.sh@945 -- # kill 154063 00:28:22.087 21:24:44 -- common/autotest_common.sh@950 -- # wait 154063 00:28:22.654 00:28:22.654 real 0m2.360s 00:28:22.654 user 0m4.700s 00:28:22.654 sys 0m0.585s 00:28:22.654 21:24:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:22.654 ************************************ 00:28:22.654 END TEST nvme_rpc 00:28:22.654 ************************************ 00:28:22.654 21:24:45 -- common/autotest_common.sh@10 -- # set +x 00:28:22.654 21:24:45 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:28:22.654 21:24:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:22.654 21:24:45 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:28:22.654 21:24:45 -- common/autotest_common.sh@10 -- # set +x 00:28:22.654 ************************************ 00:28:22.654 START TEST nvme_rpc_timeouts 00:28:22.654 ************************************ 00:28:22.654 21:24:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:28:22.654 * Looking for test storage... 00:28:22.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:28:22.654 21:24:45 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:22.654 21:24:45 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_154131 00:28:22.654 21:24:45 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_154131 00:28:22.654 21:24:45 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=154155 00:28:22.654 21:24:45 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:28:22.654 21:24:45 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 154155 00:28:22.654 21:24:45 -- common/autotest_common.sh@819 -- # '[' -z 154155 ']' 00:28:22.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.654 21:24:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.654 21:24:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:22.654 21:24:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.654 21:24:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:22.654 21:24:45 -- common/autotest_common.sh@10 -- # set +x 00:28:22.654 21:24:45 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:28:22.913 [2024-06-07 21:24:45.379812] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
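Before the timeouts test gets going, note what the nvme_rpc test that just finished actually verified: bdev_nvme_apply_firmware was deliberately pointed at a nonexistent file, and the pass condition was that rpc.py exits nonzero and prints the JSON-RPC error object (code -32603, "open file failed."). The same negative check can be reproduced by hand with the verbs seen in the trace (paths taken from this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
    if ! $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo "apply_firmware failed as expected"   # the test records this as rv=1
    fi
    $rpc bdev_nvme_detach_controller Nvme0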
00:28:22.913 [2024-06-07 21:24:45.380275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154155 ] 00:28:22.913 [2024-06-07 21:24:45.537369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:23.170 [2024-06-07 21:24:45.614287] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:23.170 [2024-06-07 21:24:45.614690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.170 [2024-06-07 21:24:45.614698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.735 21:24:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:23.735 21:24:46 -- common/autotest_common.sh@852 -- # return 0 00:28:23.735 21:24:46 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:28:23.735 Checking default timeout settings: 00:28:23.735 21:24:46 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:23.993 Making settings changes with rpc: 00:28:23.993 21:24:46 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:28:23.993 21:24:46 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:28:24.250 21:24:46 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:28:24.250 Check default vs. modified settings: 00:28:24.250 21:24:46 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_154131 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_154131 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:28:24.507 Setting action_on_timeout is changed as expected. 
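The comparison just traced for action_on_timeout (and repeated below for timeout_us and timeout_admin_us) reduces each saved config to a single value: grep pulls the setting's line out of the save_config dump, awk takes the second column, and sed strips everything that is not alphanumeric before the before/after strings are compared. The extraction, pulled out into a helper for clarity (file names from this run; the function name is illustrative):

    get_setting() {   # get_setting <name> <saved-config-file>
        grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }
    before=$(get_setting action_on_timeout /tmp/settings_default_154131)   # -> none
    after=$(get_setting action_on_timeout /tmp/settings_modified_154131)   # -> abort
    [ "$before" != "$after" ] && echo "Setting action_on_timeout is changed as expected."

The sed pass is what turns a JSON fragment like '"none",' into the bare token none, so the test is insensitive to quoting and trailing commas in the saved config.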
00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_154131 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_154131 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:28:24.507 21:24:47 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:28:24.508 Setting timeout_us is changed as expected. 00:28:24.508 21:24:47 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:28:24.508 21:24:47 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_154131 00:28:24.508 21:24:47 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:28:24.508 21:24:47 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:28:24.508 21:24:47 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:28:24.508 21:24:47 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_154131 00:28:24.508 21:24:47 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:28:24.508 21:24:47 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:28:24.766 21:24:47 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:28:24.766 Setting timeout_admin_us is changed as expected. 00:28:24.766 21:24:47 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:28:24.766 21:24:47 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:28:24.766 21:24:47 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:28:24.766 21:24:47 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_154131 /tmp/settings_modified_154131 00:28:24.766 21:24:47 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 154155 00:28:24.766 21:24:47 -- common/autotest_common.sh@926 -- # '[' -z 154155 ']' 00:28:24.766 21:24:47 -- common/autotest_common.sh@930 -- # kill -0 154155 00:28:24.766 21:24:47 -- common/autotest_common.sh@931 -- # uname 00:28:24.766 21:24:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:24.766 21:24:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 154155 00:28:24.766 21:24:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:24.766 killing process with pid 154155 00:28:24.766 21:24:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:24.766 21:24:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 154155' 00:28:24.766 21:24:47 -- common/autotest_common.sh@945 -- # kill 154155 00:28:24.766 21:24:47 -- common/autotest_common.sh@950 -- # wait 154155 00:28:25.024 RPC TIMEOUT SETTING TEST PASSED. 00:28:25.025 21:24:47 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
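killprocess, used above to tear down the target, is not a bare kill: it first probes the pid with kill -0, inspects the command name with ps (an SPDK reactor shows up as reactor_0), refuses to signal a sudo wrapper, and waits afterwards so the exit status is reaped. A reduced sketch of that pattern, not the full autotest helper:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                       # still alive?
        local name; name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = "sudo" ] && return 1                 # never signal the sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true   # reap (works because the target was launched from this shell)
    }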
00:28:25.025 00:28:25.025 real 0m2.397s 00:28:25.025 user 0m4.831s 00:28:25.025 sys 0m0.563s 00:28:25.025 21:24:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:25.025 ************************************ 00:28:25.025 END TEST nvme_rpc_timeouts 00:28:25.025 21:24:47 -- common/autotest_common.sh@10 -- # set +x 00:28:25.025 ************************************ 00:28:25.025 21:24:47 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:28:25.025 21:24:47 -- spdk/autotest.sh@255 -- # [[ 0 -eq 1 ]] 00:28:25.025 21:24:47 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:28:25.025 21:24:47 -- spdk/autotest.sh@268 -- # timing_exit lib 00:28:25.025 21:24:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:25.025 21:24:47 -- common/autotest_common.sh@10 -- # set +x 00:28:25.283 21:24:47 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:28:25.283 21:24:47 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:28:25.283 21:24:47 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:28:25.283 21:24:47 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:28:25.283 21:24:47 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:28:25.283 21:24:47 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:28:25.283 21:24:47 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:28:25.283 21:24:47 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:28:25.283 21:24:47 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:28:25.283 21:24:47 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:28:25.283 21:24:47 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:28:25.283 21:24:47 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:28:25.283 21:24:47 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:28:25.283 21:24:47 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:28:25.283 21:24:47 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:28:25.283 21:24:47 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:28:25.283 21:24:47 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:28:25.283 21:24:47 -- spdk/autotest.sh@378 -- # [[ 1 -eq 1 ]] 00:28:25.283 21:24:47 -- spdk/autotest.sh@379 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:28:25.284 21:24:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:25.284 21:24:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:25.284 21:24:47 -- common/autotest_common.sh@10 -- # set +x 00:28:25.284 ************************************ 00:28:25.284 START TEST blockdev_raid5f 00:28:25.284 ************************************ 00:28:25.284 21:24:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:28:25.284 * Looking for test storage... 
00:28:25.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:28:25.284 21:24:47 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:28:25.284 21:24:47 -- bdev/nbd_common.sh@6 -- # set -e 00:28:25.284 21:24:47 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:28:25.284 21:24:47 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:25.284 21:24:47 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:28:25.284 21:24:47 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:28:25.284 21:24:47 -- bdev/blockdev.sh@18 -- # : 00:28:25.284 21:24:47 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:28:25.284 21:24:47 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:28:25.284 21:24:47 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:28:25.284 21:24:47 -- bdev/blockdev.sh@672 -- # uname -s 00:28:25.284 21:24:47 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:28:25.284 21:24:47 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:28:25.284 21:24:47 -- bdev/blockdev.sh@680 -- # test_type=raid5f 00:28:25.284 21:24:47 -- bdev/blockdev.sh@681 -- # crypto_device= 00:28:25.284 21:24:47 -- bdev/blockdev.sh@682 -- # dek= 00:28:25.284 21:24:47 -- bdev/blockdev.sh@683 -- # env_ctx= 00:28:25.284 21:24:47 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:28:25.284 21:24:47 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:28:25.284 21:24:47 -- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]] 00:28:25.284 21:24:47 -- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]] 00:28:25.284 21:24:47 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:28:25.284 21:24:47 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=154283 00:28:25.284 21:24:47 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:25.284 21:24:47 -- bdev/blockdev.sh@47 -- # waitforlisten 154283 00:28:25.284 21:24:47 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:25.284 21:24:47 -- common/autotest_common.sh@819 -- # '[' -z 154283 ']' 00:28:25.284 21:24:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.284 21:24:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:25.284 21:24:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.284 21:24:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:25.284 21:24:47 -- common/autotest_common.sh@10 -- # set +x 00:28:25.284 [2024-06-07 21:24:47.870776] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
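setup_raid5f_conf, traced next, builds the device under test entirely over RPC: three malloc base bdevs and a raid5f volume striped across them. Judging by the bdev dump further down (strip_size_kb 2, three base bdevs of 65536 512-byte blocks, i.e. 32 MiB each), the equivalent rpc.py calls would be approximately the following; the sizes are inferred from this run, and the raid5f level requires an SPDK built with raid5f support:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 0 1 2; do
        $rpc bdev_malloc_create -b Malloc$i 32 512    # 32 MiB, 512-byte blocks
    done
    $rpc bdev_raid_create -n raid5f -z 2 -r raid5f -b "Malloc0 Malloc1 Malloc2"

With two data stripes plus one parity stripe per row, three 32 MiB bases yield the 64 MiB (131072-block) raid5f volume reported below.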
00:28:25.284 [2024-06-07 21:24:47.870966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154283 ] 00:28:25.542 [2024-06-07 21:24:48.024688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.543 [2024-06-07 21:24:48.094044] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:25.543 [2024-06-07 21:24:48.094304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.109 21:24:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:26.109 21:24:48 -- common/autotest_common.sh@852 -- # return 0 00:28:26.109 21:24:48 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:28:26.109 21:24:48 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf 00:28:26.109 21:24:48 -- bdev/blockdev.sh@278 -- # rpc_cmd 00:28:26.109 21:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.109 21:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:26.109 Malloc0 00:28:26.109 Malloc1 00:28:26.109 Malloc2 00:28:26.368 21:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.368 21:24:48 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:28:26.368 21:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.368 21:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:26.369 21:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.369 21:24:48 -- bdev/blockdev.sh@738 -- # cat 00:28:26.369 21:24:48 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:28:26.369 21:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.369 21:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:26.369 21:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.369 21:24:48 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:28:26.369 21:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.369 21:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:26.369 21:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.369 21:24:48 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:28:26.369 21:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.369 21:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:26.369 21:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.369 21:24:48 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:28:26.369 21:24:48 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:28:26.369 21:24:48 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:28:26.369 21:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.369 21:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:26.369 21:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.369 21:24:48 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:28:26.369 21:24:48 -- bdev/blockdev.sh@747 -- # jq -r .name 00:28:26.369 21:24:48 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "5680d61d-9f27-4de9-9224-31b35e65ea30"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5680d61d-9f27-4de9-9224-31b35e65ea30",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "5680d61d-9f27-4de9-9224-31b35e65ea30",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "06c88cf4-df65-4719-b136-718be00b8860",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "4c39e4be-f0c7-4127-b76f-42308180fcdc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "39c814b9-1d67-4990-a11b-4b2fd3b149cf",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:28:26.369 21:24:48 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:28:26.369 21:24:48 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f 00:28:26.369 21:24:48 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:28:26.369 21:24:48 -- bdev/blockdev.sh@752 -- # killprocess 154283 00:28:26.369 21:24:48 -- common/autotest_common.sh@926 -- # '[' -z 154283 ']' 00:28:26.369 21:24:48 -- common/autotest_common.sh@930 -- # kill -0 154283 00:28:26.369 21:24:48 -- common/autotest_common.sh@931 -- # uname 00:28:26.369 21:24:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:26.369 21:24:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 154283 00:28:26.369 21:24:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:26.369 killing process with pid 154283 00:28:26.369 21:24:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:26.369 21:24:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 154283' 00:28:26.369 21:24:48 -- common/autotest_common.sh@945 -- # kill 154283 00:28:26.369 21:24:48 -- common/autotest_common.sh@950 -- # wait 154283 00:28:26.936 21:24:49 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:26.936 21:24:49 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:28:26.936 21:24:49 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:28:26.936 21:24:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:26.936 21:24:49 -- common/autotest_common.sh@10 -- # set +x 00:28:26.936 ************************************ 00:28:26.936 START TEST bdev_hello_world 00:28:26.936 ************************************ 00:28:26.936 21:24:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:28:26.936 [2024-06-07 21:24:49.493901] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:26.936 [2024-06-07 21:24:49.494389] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154337 ] 00:28:27.195 [2024-06-07 21:24:49.652267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.195 [2024-06-07 21:24:49.718099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.454 [2024-06-07 21:24:49.932554] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:28:27.454 [2024-06-07 21:24:49.932642] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:28:27.454 [2024-06-07 21:24:49.932684] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:28:27.454 [2024-06-07 21:24:49.933261] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:28:27.454 [2024-06-07 21:24:49.933471] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:28:27.454 [2024-06-07 21:24:49.933533] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:28:27.454 [2024-06-07 21:24:49.933607] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:28:27.454 00:28:27.454 [2024-06-07 21:24:49.933662] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:28:27.712 00:28:27.712 real 0m0.757s 00:28:27.712 user 0m0.401s 00:28:27.712 sys 0m0.240s 00:28:27.712 ************************************ 00:28:27.712 END TEST bdev_hello_world 00:28:27.712 ************************************ 00:28:27.712 21:24:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:27.713 21:24:50 -- common/autotest_common.sh@10 -- # set +x 00:28:27.713 21:24:50 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:28:27.713 21:24:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:27.713 21:24:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:27.713 21:24:50 -- common/autotest_common.sh@10 -- # set +x 00:28:27.713 ************************************ 00:28:27.713 START TEST bdev_bounds 00:28:27.713 ************************************ 00:28:27.713 21:24:50 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:28:27.713 21:24:50 -- bdev/blockdev.sh@288 -- # bdevio_pid=154369 00:28:27.713 21:24:50 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:28:27.713 Process bdevio pid: 154369 00:28:27.713 21:24:50 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 154369' 00:28:27.713 21:24:50 -- bdev/blockdev.sh@291 -- # waitforlisten 154369 00:28:27.713 21:24:50 -- common/autotest_common.sh@819 -- # '[' -z 154369 ']' 00:28:27.713 21:24:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.713 21:24:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:27.713 21:24:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
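hello_bdev, which just completed above, is driven entirely by its --json config: it opens the bdev named with -b, writes "Hello World!" through an io channel, reads it back, and stops the app. It can be pointed at any bdev; for example, against a throwaway malloc bdev described in a minimal config file (the file below is assumed for illustration, not taken from this run):

    cat > /tmp/hello.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 8192, "block_size": 512 } }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/hello.json -b Malloc0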
00:28:27.713 21:24:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:27.713 21:24:50 -- common/autotest_common.sh@10 -- # set +x 00:28:27.713 21:24:50 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:27.713 [2024-06-07 21:24:50.322041] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:27.713 [2024-06-07 21:24:50.322546] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154369 ] 00:28:27.971 [2024-06-07 21:24:50.510525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:27.971 [2024-06-07 21:24:50.578021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.971 [2024-06-07 21:24:50.578152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.971 [2024-06-07 21:24:50.578149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:28.906 21:24:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:28.906 21:24:51 -- common/autotest_common.sh@852 -- # return 0 00:28:28.906 21:24:51 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:28.906 I/O targets: 00:28:28.906 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:28:28.906 00:28:28.906 00:28:28.906 CUnit - A unit testing framework for C - Version 2.1-3 00:28:28.906 http://cunit.sourceforge.net/ 00:28:28.906 00:28:28.906 00:28:28.906 Suite: bdevio tests on: raid5f 00:28:28.906 Test: blockdev write read block ...passed 00:28:28.906 Test: blockdev write zeroes read block ...passed 00:28:28.906 Test: blockdev write zeroes read no split ...passed 00:28:28.906 Test: blockdev write zeroes read split ...passed 00:28:28.906 Test: blockdev write zeroes read split partial ...passed 00:28:28.906 Test: blockdev reset ...passed 00:28:28.906 Test: blockdev write read 8 blocks ...passed 00:28:28.906 Test: blockdev write read size > 128k ...passed 00:28:28.906 Test: blockdev write read invalid size ...passed 00:28:28.906 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:28.906 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:28.906 Test: blockdev write read max offset ...passed 00:28:28.906 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:28.906 Test: blockdev writev readv 8 blocks ...passed 00:28:28.906 Test: blockdev writev readv 30 x 1block ...passed 00:28:28.906 Test: blockdev writev readv block ...passed 00:28:28.906 Test: blockdev writev readv size > 128k ...passed 00:28:28.906 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:28.906 Test: blockdev comparev and writev ...passed 00:28:28.906 Test: blockdev nvme passthru rw ...passed 00:28:28.906 Test: blockdev nvme passthru vendor specific ...passed 00:28:28.906 Test: blockdev nvme admin passthru ...passed 00:28:28.906 Test: blockdev copy ...passed 00:28:28.906 00:28:28.906 Run Summary: Type Total Ran Passed Failed Inactive 00:28:28.906 suites 1 1 n/a 0 0 00:28:28.906 tests 23 23 23 0 0 00:28:28.906 asserts 130 130 130 0 n/a 00:28:28.906 00:28:28.906 Elapsed time = 0.327 seconds 00:28:28.906 0 00:28:28.906 21:24:51 -- bdev/blockdev.sh@293 -- # killprocess 154369 00:28:28.906 21:24:51 -- common/autotest_common.sh@926 -- # '[' -z 154369 ']' 
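The bdev_bounds run above wires two pieces together: bdevio started with -w holds the bdev open and waits on its RPC socket, and tests.py perform_tests then drives the 23-test CUnit suite through that socket. The shape of the invocation, reduced from the trace (the socket-wait loop stands in for the harness's waitforlisten):

    bdevio=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
    tests=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py
    $bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    pid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done   # crude waitforlisten
    $tests perform_tests
    kill $pid && wait $pid

Running bdevio with -w (wait for the RPC trigger) is what lets the harness decide when the suite starts, instead of the tests firing as soon as the app boots.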
00:28:28.906 21:24:51 -- common/autotest_common.sh@930 -- # kill -0 154369 00:28:28.906 21:24:51 -- common/autotest_common.sh@931 -- # uname 00:28:28.906 21:24:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:28.906 21:24:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 154369 00:28:28.906 killing process with pid 154369 00:28:28.906 21:24:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:28.906 21:24:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:28.906 21:24:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 154369' 00:28:28.906 21:24:51 -- common/autotest_common.sh@945 -- # kill 154369 00:28:28.906 21:24:51 -- common/autotest_common.sh@950 -- # wait 154369 00:28:29.474 ************************************ 00:28:29.474 END TEST bdev_bounds 00:28:29.474 ************************************ 00:28:29.474 21:24:51 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:28:29.474 00:28:29.474 real 0m1.596s 00:28:29.474 user 0m3.942s 00:28:29.474 sys 0m0.336s 00:28:29.474 21:24:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:29.474 21:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:29.474 21:24:51 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:28:29.474 21:24:51 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:28:29.474 21:24:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:29.474 21:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:29.474 ************************************ 00:28:29.474 START TEST bdev_nbd 00:28:29.474 ************************************ 00:28:29.474 21:24:51 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:28:29.474 21:24:51 -- bdev/blockdev.sh@298 -- # uname -s 00:28:29.474 21:24:51 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:28:29.474 21:24:51 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:29.474 21:24:51 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:29.474 21:24:51 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:28:29.474 21:24:51 -- bdev/blockdev.sh@302 -- # local bdev_all 00:28:29.474 21:24:51 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:28:29.474 21:24:51 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:28:29.474 21:24:51 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:28:29.474 21:24:51 -- bdev/blockdev.sh@309 -- # local nbd_all 00:28:29.474 21:24:51 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:28:29.474 21:24:51 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:28:29.474 21:24:51 -- bdev/blockdev.sh@312 -- # local nbd_list 00:28:29.474 21:24:51 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:28:29.474 21:24:51 -- bdev/blockdev.sh@313 -- # local bdev_list 00:28:29.474 21:24:51 -- bdev/blockdev.sh@316 -- # nbd_pid=154434 00:28:29.474 21:24:51 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:29.474 21:24:51 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:28:29.474 21:24:51 -- bdev/blockdev.sh@318 -- # waitforlisten 154434 /var/tmp/spdk-nbd.sock 00:28:29.474 21:24:51 -- common/autotest_common.sh@819 -- # '[' -z 154434 ']' 00:28:29.474 21:24:51 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:29.474 21:24:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:29.474 21:24:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:29.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:29.474 21:24:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:29.474 21:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:29.474 [2024-06-07 21:24:51.966369] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:29.474 [2024-06-07 21:24:51.966801] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.474 [2024-06-07 21:24:52.125505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.733 [2024-06-07 21:24:52.199200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.300 21:24:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:30.300 21:24:52 -- common/autotest_common.sh@852 -- # return 0 00:28:30.300 21:24:52 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:28:30.300 21:24:52 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:30.300 21:24:52 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:28:30.300 21:24:52 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:28:30.300 21:24:52 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:28:30.300 21:24:52 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:30.300 21:24:52 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:28:30.300 21:24:52 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:28:30.300 21:24:52 -- bdev/nbd_common.sh@24 -- # local i 00:28:30.300 21:24:52 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:28:30.300 21:24:52 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:28:30.300 21:24:52 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:28:30.300 21:24:52 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:28:30.558 21:24:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:28:30.558 21:24:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:28:30.558 21:24:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:28:30.558 21:24:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:28:30.558 21:24:53 -- common/autotest_common.sh@857 -- # local i 00:28:30.558 21:24:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:30.558 21:24:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:30.558 21:24:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:28:30.558 21:24:53 -- common/autotest_common.sh@861 -- # break 00:28:30.558 21:24:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:30.558 21:24:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:30.558 21:24:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:30.558 1+0 records in 00:28:30.558 1+0 records out 00:28:30.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505097 s, 8.1 MB/s 00:28:30.558 21:24:53 -- common/autotest_common.sh@874 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:30.558 21:24:53 -- common/autotest_common.sh@874 -- # size=4096 00:28:30.558 21:24:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:30.558 21:24:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:30.558 21:24:53 -- common/autotest_common.sh@877 -- # return 0 00:28:30.558 21:24:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:30.558 21:24:53 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:28:30.558 21:24:53 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:30.815 21:24:53 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:28:30.815 { 00:28:30.815 "nbd_device": "/dev/nbd0", 00:28:30.815 "bdev_name": "raid5f" 00:28:30.815 } 00:28:30.815 ]' 00:28:30.815 21:24:53 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:28:30.815 21:24:53 -- bdev/nbd_common.sh@119 -- # echo '[ 00:28:30.815 { 00:28:30.815 "nbd_device": "/dev/nbd0", 00:28:30.815 "bdev_name": "raid5f" 00:28:30.815 } 00:28:30.815 ]' 00:28:30.815 21:24:53 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:28:30.815 21:24:53 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:30.815 21:24:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:30.815 21:24:53 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:28:30.815 21:24:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:30.815 21:24:53 -- bdev/nbd_common.sh@51 -- # local i 00:28:30.815 21:24:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:30.815 21:24:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:31.073 21:24:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:31.073 21:24:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:31.073 21:24:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:31.073 21:24:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:31.073 21:24:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:31.073 21:24:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:31.073 21:24:53 -- bdev/nbd_common.sh@41 -- # break 00:28:31.073 21:24:53 -- bdev/nbd_common.sh@45 -- # return 0 00:28:31.073 21:24:53 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:31.073 21:24:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:31.073 21:24:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@65 -- # true 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@65 -- # count=0 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@122 -- # count=0 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@127 -- # return 0 00:28:31.331 21:24:53 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify 
/var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@12 -- # local i 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:31.331 21:24:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:28:31.589 /dev/nbd0 00:28:31.589 21:24:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:31.589 21:24:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:31.589 21:24:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:28:31.589 21:24:54 -- common/autotest_common.sh@857 -- # local i 00:28:31.589 21:24:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:31.589 21:24:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:31.589 21:24:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:28:31.589 21:24:54 -- common/autotest_common.sh@861 -- # break 00:28:31.589 21:24:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:31.589 21:24:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:31.589 21:24:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:31.589 1+0 records in 00:28:31.589 1+0 records out 00:28:31.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176464 s, 23.2 MB/s 00:28:31.589 21:24:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:31.589 21:24:54 -- common/autotest_common.sh@874 -- # size=4096 00:28:31.589 21:24:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:31.589 21:24:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:31.589 21:24:54 -- common/autotest_common.sh@877 -- # return 0 00:28:31.589 21:24:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:31.589 21:24:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:31.589 21:24:54 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:31.589 21:24:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:31.589 21:24:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:31.847 21:24:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:31.847 { 00:28:31.847 "nbd_device": "/dev/nbd0", 00:28:31.847 "bdev_name": "raid5f" 00:28:31.847 } 00:28:31.847 ]' 00:28:31.847 21:24:54 -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:31.847 { 00:28:31.847 "nbd_device": "/dev/nbd0", 00:28:31.847 "bdev_name": "raid5f" 00:28:31.847 } 00:28:31.847 ]' 00:28:31.847 21:24:54 -- 
bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:31.847 21:24:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:28:31.847 21:24:54 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:28:31.847 21:24:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:31.847 21:24:54 -- bdev/nbd_common.sh@65 -- # count=1 00:28:31.847 21:24:54 -- bdev/nbd_common.sh@66 -- # echo 1 00:28:31.847 21:24:54 -- bdev/nbd_common.sh@95 -- # count=1 00:28:31.847 21:24:54 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:28:31.847 21:24:54 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:28:31.847 21:24:54 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:28:31.847 21:24:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:31.847 21:24:54 -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:31.847 21:24:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:31.847 21:24:54 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:31.847 21:24:54 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:28:31.847 256+0 records in 00:28:31.847 256+0 records out 00:28:31.847 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102265 s, 103 MB/s 00:28:31.847 21:24:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:31.847 21:24:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:32.105 256+0 records in 00:28:32.105 256+0 records out 00:28:32.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302246 s, 34.7 MB/s 00:28:32.105 21:24:54 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:28:32.105 21:24:54 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:28:32.105 21:24:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:32.105 21:24:54 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:32.105 21:24:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:32.105 21:24:54 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:32.105 21:24:54 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:32.105 21:24:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:32.105 21:24:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:28:32.105 21:24:54 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:32.105 21:24:54 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:32.105 21:24:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:32.105 21:24:54 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:28:32.105 21:24:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:32.105 21:24:54 -- bdev/nbd_common.sh@51 -- # local i 00:28:32.105 21:24:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:32.105 21:24:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:32.363 21:24:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:32.363 21:24:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:32.363 21:24:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:32.363 21:24:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:32.363 21:24:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:32.363 21:24:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:28:32.363 21:24:54 -- bdev/nbd_common.sh@41 -- # break 00:28:32.363 21:24:54 -- bdev/nbd_common.sh@45 -- # return 0 00:28:32.363 21:24:54 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:32.363 21:24:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:32.363 21:24:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:32.621 21:24:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:32.621 21:24:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:32.621 21:24:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:32.621 21:24:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:32.621 21:24:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:28:32.621 21:24:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:32.621 21:24:55 -- bdev/nbd_common.sh@65 -- # true 00:28:32.621 21:24:55 -- bdev/nbd_common.sh@65 -- # count=0 00:28:32.621 21:24:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:28:32.621 21:24:55 -- bdev/nbd_common.sh@104 -- # count=0 00:28:32.621 21:24:55 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:32.621 21:24:55 -- bdev/nbd_common.sh@109 -- # return 0 00:28:32.621 21:24:55 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:32.621 21:24:55 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:32.621 21:24:55 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:28:32.621 21:24:55 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:28:32.621 21:24:55 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:28:32.621 21:24:55 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:28:32.879 malloc_lvol_verify 00:28:32.879 21:24:55 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:28:32.879 07713fad-e9e4-4564-a5af-5ea3e0727bc1 00:28:32.879 21:24:55 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:28:33.137 41d1d758-de0b-4e16-aacb-75e2a35a6c67 00:28:33.137 21:24:55 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:28:33.396 /dev/nbd0 00:28:33.396 21:24:56 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:28:33.396 mke2fs 1.45.5 (07-Jan-2020) 00:28:33.396 00:28:33.396 Filesystem too small for a journal 00:28:33.396 Creating filesystem with 1024 4k blocks and 1024 inodes 00:28:33.396 00:28:33.396 Allocating group tables: 0/1 done 00:28:33.396 Writing inode tables: 0/1 done 00:28:33.396 Writing superblocks and filesystem accounting information: 0/1 done 00:28:33.396 00:28:33.396 21:24:56 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:28:33.396 21:24:56 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:33.396 21:24:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:33.396 21:24:56 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:28:33.396 21:24:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:33.396 21:24:56 -- bdev/nbd_common.sh@51 -- # local i 00:28:33.396 21:24:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:33.396 21:24:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd0 00:28:33.653 21:24:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:33.653 21:24:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:33.653 21:24:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:33.653 21:24:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:33.653 21:24:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:33.653 21:24:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:33.653 21:24:56 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:28:33.653 21:24:56 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:28:33.653 21:24:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:33.653 21:24:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:33.653 21:24:56 -- bdev/nbd_common.sh@41 -- # break 00:28:33.653 21:24:56 -- bdev/nbd_common.sh@45 -- # return 0 00:28:33.653 21:24:56 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:28:33.653 21:24:56 -- bdev/nbd_common.sh@147 -- # return 0 00:28:33.653 21:24:56 -- bdev/blockdev.sh@324 -- # killprocess 154434 00:28:33.653 21:24:56 -- common/autotest_common.sh@926 -- # '[' -z 154434 ']' 00:28:33.653 21:24:56 -- common/autotest_common.sh@930 -- # kill -0 154434 00:28:33.653 21:24:56 -- common/autotest_common.sh@931 -- # uname 00:28:33.653 21:24:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:33.911 21:24:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 154434 00:28:33.911 21:24:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:33.911 21:24:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:33.911 killing process with pid 154434 00:28:33.911 21:24:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 154434' 00:28:33.911 21:24:56 -- common/autotest_common.sh@945 -- # kill 154434 00:28:33.911 21:24:56 -- common/autotest_common.sh@950 -- # wait 154434 00:28:34.170 21:24:56 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:28:34.170 ************************************ 00:28:34.170 END TEST bdev_nbd 00:28:34.170 ************************************ 00:28:34.170 00:28:34.170 real 0m4.722s 00:28:34.170 user 0m7.147s 00:28:34.170 sys 0m1.051s 00:28:34.170 21:24:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:34.170 21:24:56 -- common/autotest_common.sh@10 -- # set +x 00:28:34.170 21:24:56 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:28:34.170 21:24:56 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']' 00:28:34.170 21:24:56 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']' 00:28:34.170 21:24:56 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:28:34.170 21:24:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:34.170 21:24:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:34.170 21:24:56 -- common/autotest_common.sh@10 -- # set +x 00:28:34.170 ************************************ 00:28:34.170 START TEST bdev_fio 00:28:34.170 ************************************ 00:28:34.170 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:28:34.170 21:24:56 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:28:34.170 21:24:56 -- bdev/blockdev.sh@329 -- # local env_context 00:28:34.170 21:24:56 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:28:34.170 21:24:56 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:28:34.170 21:24:56 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:28:34.170 21:24:56 -- bdev/blockdev.sh@337 -- # echo '' 
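The bdev_nbd suite that wrapped up just above drives SPDK's NBD export path end to end: attach a bdev to /dev/nbd0 over the RPC socket, push data through the kernel block device, compare it, then detach. A condensed sketch of that cycle follows, with the socket path, bdev name, and dd/cmp arguments copied from the trace; the harness's 20-attempt waitfornbd retry loops are collapsed into a single poll, and /tmp/nbdrandtest stands in for the test file under test/bdev.

#!/usr/bin/env bash
set -euo pipefail
# Assumes an SPDK NBD target is already listening on this socket and
# exposes a bdev named "raid5f", as in the run above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock

"$RPC" -s "$SOCK" nbd_start_disk raid5f /dev/nbd0
# Wait for the kernel to register the device; the harness polls
# /proc/partitions up to 20 times and also reads one 4 KiB block
# back with iflag=direct to confirm the device answers I/O.
until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done

# Write 1 MiB of random data through the NBD device, then compare.
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0
rm -f /tmp/nbdrandtest

"$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd0
"$RPC" -s "$SOCK" nbd_get_disks   # expect [] once the detach completes

The trace above additionally repeats the attach/detach cycle against an ext4-formatted logical volume (bdev_malloc_create, bdev_lvol_create_lvstore, bdev_lvol_create, then mkfs.ext4 on the exported device) before killing the RPC target; the sketch keeps only the core write/compare round trip.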
00:28:34.170 21:24:56 -- bdev/blockdev.sh@337 -- # env_context= 00:28:34.170 21:24:56 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:28:34.170 21:24:56 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:34.170 21:24:56 -- common/autotest_common.sh@1260 -- # local workload=verify 00:28:34.170 21:24:56 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:28:34.170 21:24:56 -- common/autotest_common.sh@1262 -- # local env_context= 00:28:34.170 21:24:56 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:28:34.170 21:24:56 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:28:34.170 21:24:56 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:28:34.170 21:24:56 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:28:34.170 21:24:56 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:34.170 21:24:56 -- common/autotest_common.sh@1280 -- # cat 00:28:34.170 21:24:56 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:28:34.170 21:24:56 -- common/autotest_common.sh@1293 -- # cat 00:28:34.170 21:24:56 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:28:34.170 21:24:56 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:28:34.170 21:24:56 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:28:34.170 21:24:56 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:28:34.170 21:24:56 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:28:34.170 21:24:56 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]' 00:28:34.170 21:24:56 -- bdev/blockdev.sh@341 -- # echo filename=raid5f 00:28:34.170 21:24:56 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:28:34.170 21:24:56 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:34.170 21:24:56 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:28:34.170 21:24:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:34.170 21:24:56 -- common/autotest_common.sh@10 -- # set +x 00:28:34.170 ************************************ 00:28:34.170 START TEST bdev_fio_rw_verify 00:28:34.170 ************************************ 00:28:34.170 21:24:56 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:34.170 21:24:56 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:34.170 21:24:56 -- common/autotest_common.sh@1316 -- # local 
fio_dir=/usr/src/fio 00:28:34.170 21:24:56 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:28:34.170 21:24:56 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:34.170 21:24:56 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:34.170 21:24:56 -- common/autotest_common.sh@1320 -- # shift 00:28:34.170 21:24:56 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:34.170 21:24:56 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:34.170 21:24:56 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:34.170 21:24:56 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:34.170 21:24:56 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:34.170 21:24:56 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:28:34.170 21:24:56 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:28:34.170 21:24:56 -- common/autotest_common.sh@1326 -- # break 00:28:34.170 21:24:56 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:34.170 21:24:56 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:34.429 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:34.430 fio-3.35 00:28:34.430 Starting 1 thread 00:28:46.633 00:28:46.633 job_raid5f: (groupid=0, jobs=1): err= 0: pid=154668: Fri Jun 7 21:25:07 2024 00:28:46.633 read: IOPS=11.1k, BW=43.2MiB/s (45.3MB/s)(432MiB/10001msec) 00:28:46.633 slat (nsec): min=18403, max=95197, avg=21660.13, stdev=4712.92 00:28:46.633 clat (usec): min=10, max=376, avg=142.80, stdev=53.98 00:28:46.633 lat (usec): min=32, max=410, avg=164.46, stdev=55.02 00:28:46.633 clat percentiles (usec): 00:28:46.633 | 50.000th=[ 143], 99.000th=[ 273], 99.900th=[ 314], 99.990th=[ 347], 00:28:46.633 | 99.999th=[ 371] 00:28:46.633 write: IOPS=11.6k, BW=45.2MiB/s (47.4MB/s)(447MiB/9881msec); 0 zone resets 00:28:46.633 slat (usec): min=9, max=156, avg=19.25, stdev= 5.07 00:28:46.633 clat (usec): min=61, max=1462, avg=327.17, stdev=57.23 00:28:46.633 lat (usec): min=79, max=1483, avg=346.42, stdev=59.16 00:28:46.633 clat percentiles (usec): 00:28:46.633 | 50.000th=[ 326], 99.000th=[ 486], 99.900th=[ 578], 99.990th=[ 1045], 00:28:46.633 | 99.999th=[ 1450] 00:28:46.633 bw ( KiB/s): min=42328, max=50640, per=98.93%, avg=45797.89, stdev=2101.04, samples=19 00:28:46.633 iops : min=10582, max=12660, avg=11449.47, stdev=525.26, samples=19 00:28:46.633 lat (usec) : 20=0.01%, 50=0.01%, 100=12.19%, 250=39.48%, 500=47.99% 00:28:46.633 lat (usec) : 750=0.31%, 1000=0.02% 00:28:46.633 lat (msec) : 2=0.01% 00:28:46.633 cpu : usr=99.26%, sys=0.71%, ctx=33, majf=0, minf=10927 00:28:46.633 IO depths : 1=7.7%, 2=20.0%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:46.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.633 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.633 issued rwts: total=110604,114357,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:46.633 latency : target=0, window=0, 
percentile=100.00%, depth=8 00:28:46.633 00:28:46.633 Run status group 0 (all jobs): 00:28:46.633 READ: bw=43.2MiB/s (45.3MB/s), 43.2MiB/s-43.2MiB/s (45.3MB/s-45.3MB/s), io=432MiB (453MB), run=10001-10001msec 00:28:46.633 WRITE: bw=45.2MiB/s (47.4MB/s), 45.2MiB/s-45.2MiB/s (47.4MB/s-47.4MB/s), io=447MiB (468MB), run=9881-9881msec 00:28:46.633 ----------------------------------------------------- 00:28:46.633 Suppressions used: 00:28:46.633 count bytes template 00:28:46.633 1 7 /usr/src/fio/parse.c 00:28:46.633 320 30720 /usr/src/fio/iolog.c 00:28:46.633 2 596 libcrypto.so 00:28:46.633 ----------------------------------------------------- 00:28:46.633 00:28:46.633 00:28:46.633 real 0m11.319s 00:28:46.633 user 0m11.794s 00:28:46.633 sys 0m0.645s 00:28:46.633 ************************************ 00:28:46.633 END TEST bdev_fio_rw_verify 00:28:46.633 ************************************ 00:28:46.633 21:25:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:46.634 21:25:08 -- common/autotest_common.sh@10 -- # set +x 00:28:46.634 21:25:08 -- bdev/blockdev.sh@348 -- # rm -f 00:28:46.634 21:25:08 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:46.634 21:25:08 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:28:46.634 21:25:08 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:46.634 21:25:08 -- common/autotest_common.sh@1260 -- # local workload=trim 00:28:46.634 21:25:08 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:28:46.634 21:25:08 -- common/autotest_common.sh@1262 -- # local env_context= 00:28:46.634 21:25:08 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:28:46.634 21:25:08 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:28:46.634 21:25:08 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:28:46.634 21:25:08 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:28:46.634 21:25:08 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:46.634 21:25:08 -- common/autotest_common.sh@1280 -- # cat 00:28:46.634 21:25:08 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:28:46.634 21:25:08 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:28:46.634 21:25:08 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:28:46.634 21:25:08 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:28:46.634 21:25:08 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "5680d61d-9f27-4de9-9224-31b35e65ea30"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5680d61d-9f27-4de9-9224-31b35e65ea30",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "5680d61d-9f27-4de9-9224-31b35e65ea30",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' 
' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "06c88cf4-df65-4719-b136-718be00b8860",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "4c39e4be-f0c7-4127-b76f-42308180fcdc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "39c814b9-1d67-4990-a11b-4b2fd3b149cf",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:28:46.634 21:25:08 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:28:46.634 21:25:08 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:46.634 /home/vagrant/spdk_repo/spdk 00:28:46.634 21:25:08 -- bdev/blockdev.sh@360 -- # popd 00:28:46.634 ************************************ 00:28:46.634 END TEST bdev_fio 00:28:46.634 ************************************ 00:28:46.634 21:25:08 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:28:46.634 21:25:08 -- bdev/blockdev.sh@362 -- # return 0 00:28:46.634 00:28:46.634 real 0m11.497s 00:28:46.634 user 0m11.922s 00:28:46.634 sys 0m0.691s 00:28:46.634 21:25:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:46.634 21:25:08 -- common/autotest_common.sh@10 -- # set +x 00:28:46.634 21:25:08 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:46.634 21:25:08 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:46.634 21:25:08 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:28:46.634 21:25:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:46.634 21:25:08 -- common/autotest_common.sh@10 -- # set +x 00:28:46.634 ************************************ 00:28:46.634 START TEST bdev_verify 00:28:46.634 ************************************ 00:28:46.634 21:25:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:46.634 [2024-06-07 21:25:08.282410] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:46.634 [2024-06-07 21:25:08.282651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154838 ] 00:28:46.634 [2024-06-07 21:25:08.451287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:46.634 [2024-06-07 21:25:08.531128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.634 [2024-06-07 21:25:08.531131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.634 Running I/O for 5 seconds... 
00:28:51.929 00:28:51.929 Latency(us) 00:28:51.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.929 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:51.929 Verification LBA range: start 0x0 length 0x2000 00:28:51.929 raid5f : 5.01 11909.26 46.52 0.00 0.00 17029.15 196.42 15013.70 00:28:51.929 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:51.929 Verification LBA range: start 0x2000 length 0x2000 00:28:51.929 raid5f : 5.01 11959.68 46.72 0.00 0.00 16954.73 185.25 15073.28 00:28:51.929 =================================================================================================================== 00:28:51.929 Total : 23868.94 93.24 0.00 0.00 16991.85 185.25 15073.28 00:28:51.929 00:28:51.929 real 0m5.802s 00:28:51.929 user 0m10.826s 00:28:51.929 sys 0m0.236s 00:28:51.929 ************************************ 00:28:51.929 END TEST bdev_verify 00:28:51.929 ************************************ 00:28:51.929 21:25:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:51.929 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:28:51.929 21:25:14 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:51.929 21:25:14 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:28:51.929 21:25:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:51.929 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:28:51.929 ************************************ 00:28:51.929 START TEST bdev_verify_big_io 00:28:51.929 ************************************ 00:28:51.929 21:25:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:51.929 [2024-06-07 21:25:14.123786] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:51.929 [2024-06-07 21:25:14.124290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154946 ] 00:28:51.929 [2024-06-07 21:25:14.276997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:51.929 [2024-06-07 21:25:14.347275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.929 [2024-06-07 21:25:14.347282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.929 Running I/O for 5 seconds... 
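The bdev_verify pass above and the two bdevperf runs that follow differ only in their workload flags; the big-I/O run announced just above reports its latency table next. For reference, the three invocations side by side, with every argument copied verbatim from this log:

#!/usr/bin/env bash
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

# 4 KiB verify for 5 s on two cores (bdev_verify)
"$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w verify -t 5 -C -m 0x3
# 64 KiB verify for 5 s on two cores (bdev_verify_big_io)
"$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3
# 4 KiB write_zeroes for 1 s, single core per the EAL parameters
"$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w write_zeroes -t 1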
00:28:57.229 00:28:57.229 Latency(us) 00:28:57.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.229 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:57.229 Verification LBA range: start 0x0 length 0x200 00:28:57.229 raid5f : 5.19 693.59 43.35 0.00 0.00 4797266.26 255.07 183024.17 00:28:57.229 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:57.229 Verification LBA range: start 0x200 length 0x200 00:28:57.229 raid5f : 5.19 688.31 43.02 0.00 0.00 4835291.39 346.30 205902.20 00:28:57.229 =================================================================================================================== 00:28:57.229 Total : 1381.89 86.37 0.00 0.00 4816201.94 255.07 205902.20 00:28:57.487 ************************************ 00:28:57.487 END TEST bdev_verify_big_io 00:28:57.487 ************************************ 00:28:57.487 00:28:57.487 real 0m5.957s 00:28:57.487 user 0m11.147s 00:28:57.487 sys 0m0.253s 00:28:57.487 21:25:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:57.487 21:25:20 -- common/autotest_common.sh@10 -- # set +x 00:28:57.487 21:25:20 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:57.487 21:25:20 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:28:57.487 21:25:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:57.487 21:25:20 -- common/autotest_common.sh@10 -- # set +x 00:28:57.487 ************************************ 00:28:57.487 START TEST bdev_write_zeroes 00:28:57.487 ************************************ 00:28:57.488 21:25:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:57.488 [2024-06-07 21:25:20.133687] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:57.488 [2024-06-07 21:25:20.133984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155044 ] 00:28:57.746 [2024-06-07 21:25:20.281278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.746 [2024-06-07 21:25:20.334824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.005 Running I/O for 1 seconds... 
00:28:58.941 00:28:58.941 Latency(us) 00:28:58.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.941 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:58.941 raid5f : 1.00 26667.08 104.17 0.00 0.00 4784.80 1370.30 6434.44 00:28:58.941 =================================================================================================================== 00:28:58.941 Total : 26667.08 104.17 0.00 0.00 4784.80 1370.30 6434.44 00:28:59.200 ************************************ 00:28:59.200 END TEST bdev_write_zeroes 00:28:59.200 ************************************ 00:28:59.200 00:28:59.200 real 0m1.708s 00:28:59.200 user 0m1.380s 00:28:59.200 sys 0m0.212s 00:28:59.200 21:25:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:59.200 21:25:21 -- common/autotest_common.sh@10 -- # set +x 00:28:59.200 21:25:21 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:59.200 21:25:21 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:28:59.200 21:25:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:59.200 21:25:21 -- common/autotest_common.sh@10 -- # set +x 00:28:59.200 ************************************ 00:28:59.200 START TEST bdev_json_nonenclosed 00:28:59.200 ************************************ 00:28:59.200 21:25:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:59.460 [2024-06-07 21:25:21.916153] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:59.460 [2024-06-07 21:25:21.916729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155089 ] 00:28:59.460 [2024-06-07 21:25:22.086539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.718 [2024-06-07 21:25:22.145182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.718 [2024-06-07 21:25:22.145651] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:28:59.718 [2024-06-07 21:25:22.145790] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:59.718 ************************************ 00:28:59.719 END TEST bdev_json_nonenclosed 00:28:59.719 ************************************ 00:28:59.719 00:28:59.719 real 0m0.388s 00:28:59.719 user 0m0.171s 00:28:59.719 sys 0m0.116s 00:28:59.719 21:25:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:59.719 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:28:59.719 21:25:22 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:59.719 21:25:22 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:28:59.719 21:25:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:59.719 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:28:59.719 ************************************ 00:28:59.719 START TEST bdev_json_nonarray 00:28:59.719 ************************************ 00:28:59.719 21:25:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:59.719 [2024-06-07 21:25:22.345965] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:59.719 [2024-06-07 21:25:22.346350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155119 ] 00:28:59.977 [2024-06-07 21:25:22.510787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.977 [2024-06-07 21:25:22.588555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.977 [2024-06-07 21:25:22.589073] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
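Both JSON tests in this stretch are negative tests: bdevperf is pointed at a deliberately malformed config, and the suite passes when spdk_app_start rejects it with the json_config.c errors logged above. The fixture files themselves (nonenclosed.json, nonarray.json) are not reproduced in this log; the shapes below are illustrative guesses that would trigger the same two messages, not the actual file contents:

# Hypothetical fixtures -- not taken from the log:
cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF
# -> Invalid JSON configuration: not enclosed in {}.

cat > nonarray.json <<'EOF'
{ "subsystems": {} }
EOF
# -> Invalid JSON configuration: 'subsystems' should be an array.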
00:28:59.977 [2024-06-07 21:25:22.589207] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:00.237 00:29:00.237 real 0m0.403s 00:29:00.237 user 0m0.196s 00:29:00.237 sys 0m0.104s 00:29:00.237 21:25:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:00.237 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:00.237 ************************************ 00:29:00.237 END TEST bdev_json_nonarray 00:29:00.237 ************************************ 00:29:00.237 21:25:22 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:29:00.237 21:25:22 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:29:00.237 21:25:22 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:29:00.237 21:25:22 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:29:00.237 21:25:22 -- bdev/blockdev.sh@809 -- # cleanup 00:29:00.237 21:25:22 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:29:00.237 21:25:22 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:00.237 21:25:22 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:29:00.237 21:25:22 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:29:00.237 21:25:22 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:29:00.237 21:25:22 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:29:00.237 ************************************ 00:29:00.237 END TEST blockdev_raid5f 00:29:00.237 ************************************ 00:29:00.237 00:29:00.237 real 0m35.009s 00:29:00.237 user 0m49.169s 00:29:00.237 sys 0m3.815s 00:29:00.237 21:25:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:00.237 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:00.237 21:25:22 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:29:00.237 21:25:22 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:29:00.237 21:25:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:00.237 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:00.237 21:25:22 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:29:00.237 21:25:22 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:29:00.237 21:25:22 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:29:00.237 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:29:01.611 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:01.611 Waiting for block devices as requested 00:29:01.869 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:29:02.128 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:02.128 Cleaning 00:29:02.128 Removing: /var/run/dpdk/spdk0/config 00:29:02.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:02.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:02.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:02.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:02.128 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:02.128 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:02.128 Removing: /dev/shm/spdk_tgt_trace.pid117027 00:29:02.128 Removing: /var/run/dpdk/spdk0 00:29:02.128 Removing: /var/run/dpdk/spdk_pid116843 00:29:02.128 Removing: /var/run/dpdk/spdk_pid117027 00:29:02.128 Removing: /var/run/dpdk/spdk_pid117295 00:29:02.128 Removing: /var/run/dpdk/spdk_pid117548 00:29:02.128 Removing: /var/run/dpdk/spdk_pid117723 00:29:02.128 Removing: /var/run/dpdk/spdk_pid117795 00:29:02.128 Removing: /var/run/dpdk/spdk_pid117882 
00:29:02.128 Removing: /var/run/dpdk/spdk_pid117975 00:29:02.387 Removing: /var/run/dpdk/spdk_pid118056 00:29:02.387 Removing: /var/run/dpdk/spdk_pid118104 00:29:02.387 Removing: /var/run/dpdk/spdk_pid118147 00:29:02.387 Removing: /var/run/dpdk/spdk_pid118232 00:29:02.387 Removing: /var/run/dpdk/spdk_pid118358 00:29:02.387 Removing: /var/run/dpdk/spdk_pid118906 00:29:02.387 Removing: /var/run/dpdk/spdk_pid118967 00:29:02.387 Removing: /var/run/dpdk/spdk_pid119022 00:29:02.387 Removing: /var/run/dpdk/spdk_pid119043 00:29:02.387 Removing: /var/run/dpdk/spdk_pid119131 00:29:02.387 Removing: /var/run/dpdk/spdk_pid119152 00:29:02.387 Removing: /var/run/dpdk/spdk_pid119221 00:29:02.387 Removing: /var/run/dpdk/spdk_pid119242 00:29:02.387 Removing: /var/run/dpdk/spdk_pid119293 00:29:02.387 Removing: /var/run/dpdk/spdk_pid119310 00:29:02.387 Removing: /var/run/dpdk/spdk_pid119378 00:29:02.387 Removing: /var/run/dpdk/spdk_pid119397 00:29:02.387 Removing: /var/run/dpdk/spdk_pid119532 00:29:02.387 Removing: /var/run/dpdk/spdk_pid119570 00:29:02.387 Removing: /var/run/dpdk/spdk_pid119613 00:29:02.387 Removing: /var/run/dpdk/spdk_pid119691 00:29:02.387 Removing: /var/run/dpdk/spdk_pid119763 00:29:02.387 Removing: /var/run/dpdk/spdk_pid119794 00:29:02.387 Removing: /var/run/dpdk/spdk_pid119887 00:29:02.387 Removing: /var/run/dpdk/spdk_pid119914 00:29:02.387 Removing: /var/run/dpdk/spdk_pid119956 00:29:02.387 Removing: /var/run/dpdk/spdk_pid119982 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120025 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120047 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120094 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120143 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120182 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120211 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120251 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120280 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120321 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120366 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120408 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120435 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120477 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120504 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120546 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120578 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120633 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120667 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120702 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120736 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120778 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120805 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120866 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120893 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120935 00:29:02.387 Removing: /var/run/dpdk/spdk_pid120969 00:29:02.387 Removing: /var/run/dpdk/spdk_pid121004 00:29:02.387 Removing: /var/run/dpdk/spdk_pid121038 00:29:02.387 Removing: /var/run/dpdk/spdk_pid121102 00:29:02.387 Removing: /var/run/dpdk/spdk_pid121132 00:29:02.387 Removing: /var/run/dpdk/spdk_pid121180 00:29:02.387 Removing: /var/run/dpdk/spdk_pid121214 00:29:02.387 Removing: /var/run/dpdk/spdk_pid121258 00:29:02.387 Removing: /var/run/dpdk/spdk_pid121286 00:29:02.387 Removing: /var/run/dpdk/spdk_pid121345 00:29:02.387 Removing: /var/run/dpdk/spdk_pid121373 00:29:02.387 Removing: /var/run/dpdk/spdk_pid121421 00:29:02.387 Removing: /var/run/dpdk/spdk_pid121488 00:29:02.387 Removing: /var/run/dpdk/spdk_pid121591 00:29:02.387 Removing: /var/run/dpdk/spdk_pid121751 00:29:02.387 
Removing: /var/run/dpdk/spdk_pid121829 00:29:02.387 Removing: /var/run/dpdk/spdk_pid121874 00:29:02.387 Removing: /var/run/dpdk/spdk_pid123142 00:29:02.387 Removing: /var/run/dpdk/spdk_pid123360 00:29:02.387 Removing: /var/run/dpdk/spdk_pid123566 00:29:02.387 Removing: /var/run/dpdk/spdk_pid123665 00:29:02.387 Removing: /var/run/dpdk/spdk_pid123792 00:29:02.387 Removing: /var/run/dpdk/spdk_pid123848 00:29:02.387 Removing: /var/run/dpdk/spdk_pid123870 00:29:02.387 Removing: /var/run/dpdk/spdk_pid123901 00:29:02.387 Removing: /var/run/dpdk/spdk_pid124416 00:29:02.387 Removing: /var/run/dpdk/spdk_pid124519 00:29:02.387 Removing: /var/run/dpdk/spdk_pid124620 00:29:02.387 Removing: /var/run/dpdk/spdk_pid124666 00:29:02.387 Removing: /var/run/dpdk/spdk_pid125898 00:29:02.387 Removing: /var/run/dpdk/spdk_pid126810 00:29:02.387 Removing: /var/run/dpdk/spdk_pid127735 00:29:02.387 Removing: /var/run/dpdk/spdk_pid128897 00:29:02.387 Removing: /var/run/dpdk/spdk_pid130014 00:29:02.387 Removing: /var/run/dpdk/spdk_pid131120 00:29:02.387 Removing: /var/run/dpdk/spdk_pid132675 00:29:02.387 Removing: /var/run/dpdk/spdk_pid133931 00:29:02.387 Removing: /var/run/dpdk/spdk_pid135187 00:29:02.387 Removing: /var/run/dpdk/spdk_pid135897 00:29:02.387 Removing: /var/run/dpdk/spdk_pid136473 00:29:02.387 Removing: /var/run/dpdk/spdk_pid137161 00:29:02.387 Removing: /var/run/dpdk/spdk_pid137645 00:29:02.387 Removing: /var/run/dpdk/spdk_pid138226 00:29:02.387 Removing: /var/run/dpdk/spdk_pid138809 00:29:02.387 Removing: /var/run/dpdk/spdk_pid139525 00:29:02.387 Removing: /var/run/dpdk/spdk_pid140075 00:29:02.387 Removing: /var/run/dpdk/spdk_pid141510 00:29:02.646 Removing: /var/run/dpdk/spdk_pid142148 00:29:02.646 Removing: /var/run/dpdk/spdk_pid142717 00:29:02.646 Removing: /var/run/dpdk/spdk_pid144308 00:29:02.646 Removing: /var/run/dpdk/spdk_pid145011 00:29:02.646 Removing: /var/run/dpdk/spdk_pid145675 00:29:02.646 Removing: /var/run/dpdk/spdk_pid146486 00:29:02.646 Removing: /var/run/dpdk/spdk_pid146536 00:29:02.646 Removing: /var/run/dpdk/spdk_pid146573 00:29:02.646 Removing: /var/run/dpdk/spdk_pid146619 00:29:02.646 Removing: /var/run/dpdk/spdk_pid146753 00:29:02.646 Removing: /var/run/dpdk/spdk_pid146898 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147123 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147402 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147417 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147465 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147487 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147508 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147546 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147566 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147587 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147614 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147634 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147654 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147676 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147695 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147712 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147757 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147773 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147794 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147821 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147841 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147862 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147902 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147925 00:29:02.646 Removing: /var/run/dpdk/spdk_pid147956 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148025 00:29:02.646 Removing: 
/var/run/dpdk/spdk_pid148082 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148103 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148142 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148148 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148165 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148217 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148236 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148272 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148292 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148305 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148338 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148347 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148360 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148376 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148382 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148420 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148454 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148466 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148501 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148522 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148525 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148582 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148589 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148625 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148659 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148671 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148683 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148700 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148705 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148722 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148738 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148816 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148866 00:29:02.646 Removing: /var/run/dpdk/spdk_pid148999 00:29:02.646 Removing: /var/run/dpdk/spdk_pid149016 00:29:02.646 Removing: /var/run/dpdk/spdk_pid149063 00:29:02.646 Removing: /var/run/dpdk/spdk_pid149121 00:29:02.646 Removing: /var/run/dpdk/spdk_pid149147 00:29:02.646 Removing: /var/run/dpdk/spdk_pid149169 00:29:02.646 Removing: /var/run/dpdk/spdk_pid149211 00:29:02.646 Removing: /var/run/dpdk/spdk_pid149248 00:29:02.646 Removing: /var/run/dpdk/spdk_pid149269 00:29:02.646 Removing: /var/run/dpdk/spdk_pid149348 00:29:02.646 Removing: /var/run/dpdk/spdk_pid149395 00:29:02.646 Removing: /var/run/dpdk/spdk_pid149442 00:29:02.646 Removing: /var/run/dpdk/spdk_pid149706 00:29:02.646 Removing: /var/run/dpdk/spdk_pid149817 00:29:02.646 Removing: /var/run/dpdk/spdk_pid149842 00:29:02.646 Removing: /var/run/dpdk/spdk_pid149956 00:29:02.646 Removing: /var/run/dpdk/spdk_pid150023 00:29:02.646 Removing: /var/run/dpdk/spdk_pid150052 00:29:02.646 Removing: /var/run/dpdk/spdk_pid150297 00:29:02.646 Removing: /var/run/dpdk/spdk_pid150517 00:29:02.646 Removing: /var/run/dpdk/spdk_pid150605 00:29:02.646 Removing: /var/run/dpdk/spdk_pid150655 00:29:02.646 Removing: /var/run/dpdk/spdk_pid150677 00:29:02.646 Removing: /var/run/dpdk/spdk_pid150761 00:29:02.646 Removing: /var/run/dpdk/spdk_pid151298 00:29:02.646 Removing: /var/run/dpdk/spdk_pid151330 00:29:02.646 Removing: /var/run/dpdk/spdk_pid151659 00:29:02.646 Removing: /var/run/dpdk/spdk_pid151814 00:29:02.646 Removing: /var/run/dpdk/spdk_pid151908 00:29:02.646 Removing: /var/run/dpdk/spdk_pid151966 00:29:02.646 Removing: /var/run/dpdk/spdk_pid151993 00:29:02.646 Removing: /var/run/dpdk/spdk_pid152028 00:29:02.646 Removing: /var/run/dpdk/spdk_pid153417 00:29:02.646 Removing: /var/run/dpdk/spdk_pid153535 00:29:02.646 Removing: /var/run/dpdk/spdk_pid153549 00:29:02.905 Removing: 
/var/run/dpdk/spdk_pid153566 00:29:02.905 Removing: /var/run/dpdk/spdk_pid154063 00:29:02.905 Removing: /var/run/dpdk/spdk_pid154155 00:29:02.905 Removing: /var/run/dpdk/spdk_pid154283 00:29:02.905 Removing: /var/run/dpdk/spdk_pid154337 00:29:02.905 Removing: /var/run/dpdk/spdk_pid154369 00:29:02.905 Removing: /var/run/dpdk/spdk_pid154648 00:29:02.905 Removing: /var/run/dpdk/spdk_pid154838 00:29:02.905 Removing: /var/run/dpdk/spdk_pid154946 00:29:02.905 Removing: /var/run/dpdk/spdk_pid155044 00:29:02.905 Removing: /var/run/dpdk/spdk_pid155089 00:29:02.905 Removing: /var/run/dpdk/spdk_pid155119 00:29:02.905 Clean 00:29:02.905 killing process with pid 105984 00:29:02.905 killing process with pid 106008 00:29:02.905 21:25:25 -- common/autotest_common.sh@1436 -- # return 0 00:29:02.905 21:25:25 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:29:02.905 21:25:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:02.905 21:25:25 -- common/autotest_common.sh@10 -- # set +x 00:29:02.905 21:25:25 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:29:02.905 21:25:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:02.905 21:25:25 -- common/autotest_common.sh@10 -- # set +x 00:29:02.905 21:25:25 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:02.905 21:25:25 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:02.905 21:25:25 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:02.905 21:25:25 -- spdk/autotest.sh@394 -- # hash lcov 00:29:02.905 21:25:25 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:02.905 21:25:25 -- spdk/autotest.sh@396 -- # hostname 00:29:02.905 21:25:25 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:03.163 geninfo: WARNING: invalid characters removed from testname! 
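The coverage stage running around this point reduces to a plain lcov pipeline: capture the post-test counters, merge them with the baseline taken before the tests, then strip out-of-tree and example paths from the combined report. Condensed below with the shared flags factored out; all paths and filter patterns are copied from the surrounding commands, and the geninfo warning just above appears to be lcov sanitizing the hostname string passed via -t.

#!/usr/bin/env bash
OUT=/home/vagrant/spdk_repo/spdk/../output
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
  --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
  --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q"

# Capture test-time counters, merge with the baseline, then filter.
$LCOV -c -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$OUT/cov_test.info"
$LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
           '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
  $LCOV -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
done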
00:29:49.882 21:26:06 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:49.882 21:26:11 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:52.410 21:26:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:55.692 21:26:17 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:58.226 21:26:20 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:00.757 21:26:23 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:04.052 21:26:26 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:04.052 21:26:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:04.052 21:26:26 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:04.052 21:26:26 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.052 21:26:26 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:04.052 21:26:26 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:04.052 21:26:26 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:04.052 21:26:26 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:04.052 21:26:26 -- paths/export.sh@5 -- $ export PATH 00:30:04.052 21:26:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:04.052 21:26:26 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:30:04.052 21:26:26 -- common/autobuild_common.sh@435 -- $ date +%s 00:30:04.052 21:26:26 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1717795586.XXXXXX 00:30:04.052 21:26:26 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1717795586.w4pP0Q 00:30:04.052 21:26:26 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:30:04.052 21:26:26 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:30:04.052 21:26:26 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:30:04.052 21:26:26 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:30:04.052 21:26:26 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:30:04.052 21:26:26 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:30:04.052 21:26:26 -- common/autobuild_common.sh@451 -- $ get_config_params 00:30:04.052 21:26:26 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:30:04.052 21:26:26 -- common/autotest_common.sh@10 -- $ set +x 00:30:04.052 21:26:26 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:30:04.052 21:26:26 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:30:04.052 21:26:26 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:30:04.052 21:26:26 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:04.052 21:26:26 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:30:04.052 21:26:26 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:30:04.052 21:26:26 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:30:04.052 21:26:26 -- common/autotest_common.sh@712 -- $ xtrace_disable 00:30:04.052 21:26:26 -- common/autotest_common.sh@10 -- $ set +x 00:30:04.052 21:26:26 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:30:04.052 21:26:26 -- spdk/autopackage.sh@36 -- $ [[ -n v23.11 ]] 00:30:04.052 21:26:26 -- spdk/autopackage.sh@36 -- $ [[ -e /tmp/spdk-ld-path ]] 00:30:04.052 21:26:26 -- spdk/autopackage.sh@37 -- $ source /tmp/spdk-ld-path 00:30:04.052 21:26:26 -- tmp/spdk-ld-path@1 -- $ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:30:04.052 21:26:26 -- tmp/spdk-ld-path@1 
-- $ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:30:04.052 21:26:26 -- tmp/spdk-ld-path@2 -- $ export PKG_CONFIG_PATH= 00:30:04.052 21:26:26 -- tmp/spdk-ld-path@2 -- $ PKG_CONFIG_PATH= 00:30:04.052 21:26:26 -- spdk/autopackage.sh@40 -- $ get_config_params 00:30:04.052 21:26:26 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:30:04.052 21:26:26 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:30:04.052 21:26:26 -- common/autotest_common.sh@10 -- $ set +x 00:30:04.052 21:26:26 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:30:04.052 21:26:26 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --enable-lto 00:30:04.052 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:30:04.052 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:30:04.052 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:30:04.052 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:30:04.052 Using 'verbs' RDMA provider 00:30:16.822 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:30:29.043 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:30:29.043 Creating mk/config.mk...done. 00:30:29.043 Creating mk/cc.flags.mk...done. 00:30:29.043 Type 'make' to build. 00:30:29.043 21:26:50 -- spdk/autopackage.sh@43 -- $ make -j10 00:30:29.043 make[1]: Nothing to be done for 'all'. 
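The autopackage stage above strips --enable-debug from the earlier test configuration, re-runs configure against the prebuilt DPDK with --enable-lto added, and then kicks off the release build. A minimal sketch of the equivalent manual invocation, assuming the /home/vagrant/spdk_repo layout shown in the log (every flag and path below is taken verbatim from the configure line above):

  # Illustrative reconstruction of the configure/build step logged above;
  # only the manual invocation is new, the flags are copied from the log.
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator \
      --enable-ubsan --enable-asan --enable-coverage --with-raid5f \
      --with-dpdk=/home/vagrant/spdk_repo/dpdk/build \
      --enable-lto
  make -j10   # MAKEFLAGS=-j10, as exported by autopackage.sh above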
00:30:29.043 CC lib/log/log.o 00:30:29.043 CC lib/log/log_flags.o 00:30:29.043 CC lib/log/log_deprecated.o 00:30:29.043 CC lib/ut_mock/mock.o 00:30:29.043 CC lib/ut/ut.o 00:30:29.043 LIB libspdk_ut_mock.a 00:30:29.043 LIB libspdk_log.a 00:30:29.043 LIB libspdk_ut.a 00:30:29.043 CC lib/ioat/ioat.o 00:30:29.043 CC lib/dma/dma.o 00:30:29.043 CXX lib/trace_parser/trace.o 00:30:29.043 CC lib/util/base64.o 00:30:29.043 CC lib/util/bit_array.o 00:30:29.043 CC lib/util/cpuset.o 00:30:29.043 CC lib/util/crc16.o 00:30:29.043 CC lib/util/crc32.o 00:30:29.043 CC lib/util/crc32c.o 00:30:29.043 CC lib/vfio_user/host/vfio_user_pci.o 00:30:29.043 CC lib/util/crc32_ieee.o 00:30:29.043 CC lib/vfio_user/host/vfio_user.o 00:30:29.043 CC lib/util/crc64.o 00:30:29.043 CC lib/util/dif.o 00:30:29.043 LIB libspdk_dma.a 00:30:29.043 CC lib/util/fd.o 00:30:29.043 CC lib/util/file.o 00:30:29.043 CC lib/util/hexlify.o 00:30:29.043 LIB libspdk_ioat.a 00:30:29.043 CC lib/util/iov.o 00:30:29.043 CC lib/util/math.o 00:30:29.043 CC lib/util/pipe.o 00:30:29.043 CC lib/util/strerror_tls.o 00:30:29.043 CC lib/util/string.o 00:30:29.043 CC lib/util/uuid.o 00:30:29.043 LIB libspdk_vfio_user.a 00:30:29.043 CC lib/util/fd_group.o 00:30:29.043 CC lib/util/xor.o 00:30:29.043 CC lib/util/zipf.o 00:30:29.043 LIB libspdk_util.a 00:30:29.302 LIB libspdk_trace_parser.a 00:30:29.302 CC lib/rdma/common.o 00:30:29.302 CC lib/rdma/rdma_verbs.o 00:30:29.302 CC lib/json/json_parse.o 00:30:29.302 CC lib/json/json_util.o 00:30:29.302 CC lib/idxd/idxd.o 00:30:29.302 CC lib/json/json_write.o 00:30:29.302 CC lib/vmd/vmd.o 00:30:29.302 CC lib/vmd/led.o 00:30:29.302 CC lib/conf/conf.o 00:30:29.302 CC lib/env_dpdk/env.o 00:30:29.302 CC lib/env_dpdk/memory.o 00:30:29.302 CC lib/env_dpdk/pci.o 00:30:29.302 CC lib/env_dpdk/init.o 00:30:29.302 LIB libspdk_conf.a 00:30:29.302 CC lib/env_dpdk/threads.o 00:30:29.302 LIB libspdk_rdma.a 00:30:29.560 CC lib/env_dpdk/pci_ioat.o 00:30:29.560 LIB libspdk_json.a 00:30:29.560 CC lib/env_dpdk/pci_virtio.o 00:30:29.560 CC lib/env_dpdk/pci_vmd.o 00:30:29.560 CC lib/idxd/idxd_user.o 00:30:29.560 CC lib/env_dpdk/pci_idxd.o 00:30:29.560 CC lib/env_dpdk/pci_event.o 00:30:29.560 CC lib/env_dpdk/sigbus_handler.o 00:30:29.560 CC lib/env_dpdk/pci_dpdk.o 00:30:29.560 CC lib/env_dpdk/pci_dpdk_2207.o 00:30:29.560 LIB libspdk_vmd.a 00:30:29.560 CC lib/env_dpdk/pci_dpdk_2211.o 00:30:29.818 LIB libspdk_idxd.a 00:30:29.818 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:30:29.818 CC lib/jsonrpc/jsonrpc_server.o 00:30:29.818 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:30:29.818 CC lib/jsonrpc/jsonrpc_client.o 00:30:29.818 LIB libspdk_jsonrpc.a 00:30:30.076 CC lib/rpc/rpc.o 00:30:30.076 LIB libspdk_env_dpdk.a 00:30:30.076 LIB libspdk_rpc.a 00:30:30.335 CC lib/sock/sock.o 00:30:30.335 CC lib/sock/sock_rpc.o 00:30:30.335 CC lib/notify/notify.o 00:30:30.335 CC lib/notify/notify_rpc.o 00:30:30.335 CC lib/trace/trace.o 00:30:30.335 CC lib/trace/trace_flags.o 00:30:30.335 CC lib/trace/trace_rpc.o 00:30:30.335 LIB libspdk_notify.a 00:30:30.596 LIB libspdk_trace.a 00:30:30.596 LIB libspdk_sock.a 00:30:30.596 CC lib/thread/iobuf.o 00:30:30.596 CC lib/thread/thread.o 00:30:30.596 CC lib/nvme/nvme_ctrlr_cmd.o 00:30:30.596 CC lib/nvme/nvme_ctrlr.o 00:30:30.596 CC lib/nvme/nvme_fabric.o 00:30:30.596 CC lib/nvme/nvme_ns_cmd.o 00:30:30.596 CC lib/nvme/nvme_ns.o 00:30:30.596 CC lib/nvme/nvme_pcie.o 00:30:30.596 CC lib/nvme/nvme_qpair.o 00:30:30.596 CC lib/nvme/nvme_pcie_common.o 00:30:30.855 CC lib/nvme/nvme.o 00:30:31.120 CC lib/nvme/nvme_quirks.o 00:30:31.120 CC 
lib/nvme/nvme_transport.o 00:30:31.120 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:30:31.120 CC lib/nvme/nvme_discovery.o 00:30:31.120 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:30:31.418 CC lib/nvme/nvme_tcp.o 00:30:31.418 LIB libspdk_thread.a 00:30:31.418 CC lib/nvme/nvme_opal.o 00:30:31.418 CC lib/nvme/nvme_io_msg.o 00:30:31.418 CC lib/accel/accel.o 00:30:31.674 CC lib/blob/blobstore.o 00:30:31.674 CC lib/blob/request.o 00:30:31.674 CC lib/blob/zeroes.o 00:30:31.674 CC lib/blob/blob_bs_dev.o 00:30:31.674 CC lib/nvme/nvme_poll_group.o 00:30:31.674 CC lib/virtio/virtio.o 00:30:31.674 CC lib/init/json_config.o 00:30:31.674 CC lib/nvme/nvme_zns.o 00:30:31.674 CC lib/init/subsystem.o 00:30:31.674 CC lib/nvme/nvme_cuse.o 00:30:31.930 CC lib/virtio/virtio_vhost_user.o 00:30:31.930 CC lib/init/subsystem_rpc.o 00:30:31.930 CC lib/nvme/nvme_vfio_user.o 00:30:31.930 CC lib/nvme/nvme_rdma.o 00:30:31.930 CC lib/virtio/virtio_vfio_user.o 00:30:31.930 CC lib/init/rpc.o 00:30:32.187 LIB libspdk_init.a 00:30:32.187 CC lib/accel/accel_rpc.o 00:30:32.187 CC lib/virtio/virtio_pci.o 00:30:32.187 CC lib/accel/accel_sw.o 00:30:32.187 CC lib/event/app.o 00:30:32.187 CC lib/event/reactor.o 00:30:32.187 CC lib/event/log_rpc.o 00:30:32.187 CC lib/event/app_rpc.o 00:30:32.445 LIB libspdk_virtio.a 00:30:32.445 LIB libspdk_accel.a 00:30:32.445 CC lib/event/scheduler_static.o 00:30:32.445 CC lib/bdev/bdev_rpc.o 00:30:32.445 CC lib/bdev/bdev.o 00:30:32.445 CC lib/bdev/part.o 00:30:32.445 CC lib/bdev/bdev_zone.o 00:30:32.445 CC lib/bdev/scsi_nvme.o 00:30:32.445 LIB libspdk_event.a 00:30:32.702 LIB libspdk_nvme.a 00:30:32.960 LIB libspdk_blob.a 00:30:33.218 CC lib/lvol/lvol.o 00:30:33.219 CC lib/blobfs/tree.o 00:30:33.219 CC lib/blobfs/blobfs.o 00:30:33.476 LIB libspdk_blobfs.a 00:30:33.734 LIB libspdk_lvol.a 00:30:33.734 LIB libspdk_bdev.a 00:30:33.992 CC lib/scsi/dev.o 00:30:33.992 CC lib/nvmf/ctrlr.o 00:30:33.992 CC lib/scsi/lun.o 00:30:33.992 CC lib/scsi/port.o 00:30:33.992 CC lib/nvmf/ctrlr_bdev.o 00:30:33.992 CC lib/nvmf/ctrlr_discovery.o 00:30:33.992 CC lib/nvmf/subsystem.o 00:30:33.992 CC lib/scsi/scsi.o 00:30:33.992 CC lib/ftl/ftl_core.o 00:30:33.992 CC lib/nbd/nbd.o 00:30:34.250 CC lib/nbd/nbd_rpc.o 00:30:34.250 CC lib/nvmf/nvmf_rpc.o 00:30:34.250 CC lib/nvmf/nvmf.o 00:30:34.250 CC lib/scsi/scsi_bdev.o 00:30:34.250 CC lib/scsi/scsi_pr.o 00:30:34.250 CC lib/scsi/scsi_rpc.o 00:30:34.250 LIB libspdk_nbd.a 00:30:34.250 CC lib/ftl/ftl_init.o 00:30:34.250 CC lib/scsi/task.o 00:30:34.250 CC lib/nvmf/transport.o 00:30:34.508 CC lib/nvmf/tcp.o 00:30:34.508 CC lib/nvmf/rdma.o 00:30:34.508 CC lib/ftl/ftl_layout.o 00:30:34.508 CC lib/ftl/ftl_debug.o 00:30:34.508 LIB libspdk_scsi.a 00:30:34.508 CC lib/ftl/ftl_io.o 00:30:34.508 CC lib/ftl/ftl_sb.o 00:30:34.508 CC lib/ftl/ftl_l2p.o 00:30:34.508 CC lib/ftl/ftl_l2p_flat.o 00:30:34.766 CC lib/ftl/ftl_nv_cache.o 00:30:34.766 CC lib/ftl/ftl_band.o 00:30:34.766 CC lib/ftl/ftl_band_ops.o 00:30:34.766 CC lib/ftl/ftl_writer.o 00:30:34.766 CC lib/iscsi/conn.o 00:30:34.766 CC lib/ftl/ftl_rq.o 00:30:34.766 CC lib/ftl/ftl_reloc.o 00:30:34.766 CC lib/vhost/vhost.o 00:30:35.024 CC lib/vhost/vhost_rpc.o 00:30:35.024 CC lib/iscsi/init_grp.o 00:30:35.024 CC lib/vhost/vhost_scsi.o 00:30:35.024 CC lib/iscsi/iscsi.o 00:30:35.024 CC lib/iscsi/md5.o 00:30:35.024 CC lib/vhost/vhost_blk.o 00:30:35.024 CC lib/vhost/rte_vhost_user.o 00:30:35.282 CC lib/iscsi/param.o 00:30:35.282 CC lib/iscsi/portal_grp.o 00:30:35.282 CC lib/ftl/ftl_l2p_cache.o 00:30:35.282 CC lib/ftl/ftl_p2l.o 00:30:35.282 CC 
lib/ftl/mngt/ftl_mngt.o 00:30:35.282 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:30:35.282 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:30:35.282 LIB libspdk_nvmf.a 00:30:35.540 CC lib/ftl/mngt/ftl_mngt_startup.o 00:30:35.540 CC lib/iscsi/tgt_node.o 00:30:35.540 CC lib/iscsi/iscsi_subsystem.o 00:30:35.540 CC lib/iscsi/iscsi_rpc.o 00:30:35.540 CC lib/iscsi/task.o 00:30:35.541 CC lib/ftl/mngt/ftl_mngt_md.o 00:30:35.541 CC lib/ftl/mngt/ftl_mngt_misc.o 00:30:35.541 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:30:35.798 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:30:35.798 CC lib/ftl/mngt/ftl_mngt_band.o 00:30:35.798 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:30:35.798 LIB libspdk_vhost.a 00:30:35.798 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:30:35.798 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:30:35.798 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:30:35.798 CC lib/ftl/utils/ftl_conf.o 00:30:35.798 CC lib/ftl/utils/ftl_md.o 00:30:35.798 CC lib/ftl/utils/ftl_mempool.o 00:30:35.798 LIB libspdk_iscsi.a 00:30:35.798 CC lib/ftl/utils/ftl_bitmap.o 00:30:35.798 CC lib/ftl/utils/ftl_property.o 00:30:35.798 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:30:36.056 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:30:36.056 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:30:36.056 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:30:36.056 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:30:36.056 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:30:36.056 CC lib/ftl/upgrade/ftl_sb_v3.o 00:30:36.056 CC lib/ftl/upgrade/ftl_sb_v5.o 00:30:36.056 CC lib/ftl/nvc/ftl_nvc_dev.o 00:30:36.056 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:30:36.056 CC lib/ftl/base/ftl_base_dev.o 00:30:36.056 CC lib/ftl/base/ftl_base_bdev.o 00:30:36.315 LIB libspdk_ftl.a 00:30:36.573 CC module/env_dpdk/env_dpdk_rpc.o 00:30:36.573 CC module/sock/posix/posix.o 00:30:36.573 CC module/accel/error/accel_error.o 00:30:36.573 CC module/scheduler/dynamic/scheduler_dynamic.o 00:30:36.573 CC module/scheduler/gscheduler/gscheduler.o 00:30:36.573 CC module/accel/ioat/accel_ioat.o 00:30:36.573 CC module/accel/dsa/accel_dsa.o 00:30:36.573 CC module/accel/iaa/accel_iaa.o 00:30:36.573 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:30:36.573 CC module/blob/bdev/blob_bdev.o 00:30:36.573 LIB libspdk_env_dpdk_rpc.a 00:30:36.573 CC module/accel/iaa/accel_iaa_rpc.o 00:30:36.573 LIB libspdk_scheduler_gscheduler.a 00:30:36.573 LIB libspdk_scheduler_dpdk_governor.a 00:30:36.573 CC module/accel/error/accel_error_rpc.o 00:30:36.573 LIB libspdk_scheduler_dynamic.a 00:30:36.573 CC module/accel/ioat/accel_ioat_rpc.o 00:30:36.573 CC module/accel/dsa/accel_dsa_rpc.o 00:30:36.831 LIB libspdk_accel_iaa.a 00:30:36.831 LIB libspdk_blob_bdev.a 00:30:36.831 LIB libspdk_accel_error.a 00:30:36.831 LIB libspdk_accel_dsa.a 00:30:36.831 LIB libspdk_accel_ioat.a 00:30:36.831 CC module/bdev/delay/vbdev_delay.o 00:30:36.831 CC module/blobfs/bdev/blobfs_bdev.o 00:30:36.831 CC module/bdev/null/bdev_null.o 00:30:36.831 CC module/bdev/error/vbdev_error.o 00:30:36.831 CC module/bdev/gpt/gpt.o 00:30:36.831 CC module/bdev/malloc/bdev_malloc.o 00:30:36.831 CC module/bdev/lvol/vbdev_lvol.o 00:30:36.831 CC module/bdev/nvme/bdev_nvme.o 00:30:36.831 CC module/bdev/passthru/vbdev_passthru.o 00:30:36.831 LIB libspdk_sock_posix.a 00:30:37.089 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:30:37.089 CC module/bdev/gpt/vbdev_gpt.o 00:30:37.089 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:30:37.089 CC module/bdev/null/bdev_null_rpc.o 00:30:37.089 CC module/bdev/delay/vbdev_delay_rpc.o 00:30:37.089 CC module/bdev/error/vbdev_error_rpc.o 00:30:37.089 CC module/bdev/malloc/bdev_malloc_rpc.o 00:30:37.089 LIB 
libspdk_blobfs_bdev.a 00:30:37.089 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:30:37.089 LIB libspdk_bdev_null.a 00:30:37.089 LIB libspdk_bdev_gpt.a 00:30:37.348 LIB libspdk_bdev_lvol.a 00:30:37.348 LIB libspdk_bdev_delay.a 00:30:37.348 LIB libspdk_bdev_error.a 00:30:37.348 LIB libspdk_bdev_malloc.a 00:30:37.348 LIB libspdk_bdev_passthru.a 00:30:37.348 CC module/bdev/raid/bdev_raid.o 00:30:37.348 CC module/bdev/raid/bdev_raid_rpc.o 00:30:37.348 CC module/bdev/split/vbdev_split.o 00:30:37.348 CC module/bdev/split/vbdev_split_rpc.o 00:30:37.348 CC module/bdev/aio/bdev_aio.o 00:30:37.348 CC module/bdev/zone_block/vbdev_zone_block.o 00:30:37.348 CC module/bdev/ftl/bdev_ftl.o 00:30:37.348 CC module/bdev/iscsi/bdev_iscsi.o 00:30:37.348 CC module/bdev/virtio/bdev_virtio_scsi.o 00:30:37.348 CC module/bdev/ftl/bdev_ftl_rpc.o 00:30:37.348 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:30:37.607 LIB libspdk_bdev_split.a 00:30:37.607 CC module/bdev/aio/bdev_aio_rpc.o 00:30:37.607 CC module/bdev/virtio/bdev_virtio_blk.o 00:30:37.607 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:30:37.607 CC module/bdev/virtio/bdev_virtio_rpc.o 00:30:37.607 CC module/bdev/raid/bdev_raid_sb.o 00:30:37.607 LIB libspdk_bdev_iscsi.a 00:30:37.607 LIB libspdk_bdev_aio.a 00:30:37.607 LIB libspdk_bdev_ftl.a 00:30:37.607 CC module/bdev/nvme/bdev_nvme_rpc.o 00:30:37.607 CC module/bdev/nvme/nvme_rpc.o 00:30:37.607 CC module/bdev/nvme/bdev_mdns_client.o 00:30:37.607 LIB libspdk_bdev_zone_block.a 00:30:37.607 CC module/bdev/raid/raid0.o 00:30:37.607 CC module/bdev/nvme/vbdev_opal.o 00:30:37.866 CC module/bdev/raid/raid1.o 00:30:37.866 LIB libspdk_bdev_virtio.a 00:30:37.866 CC module/bdev/raid/concat.o 00:30:37.866 CC module/bdev/nvme/vbdev_opal_rpc.o 00:30:37.866 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:30:37.866 CC module/bdev/raid/raid5f.o 00:30:38.125 LIB libspdk_bdev_nvme.a 00:30:38.125 LIB libspdk_bdev_raid.a 00:30:38.384 CC module/event/subsystems/sock/sock.o 00:30:38.384 CC module/event/subsystems/vmd/vmd.o 00:30:38.384 CC module/event/subsystems/vmd/vmd_rpc.o 00:30:38.384 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:30:38.384 CC module/event/subsystems/scheduler/scheduler.o 00:30:38.384 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:30:38.384 CC module/event/subsystems/iobuf/iobuf.o 00:30:38.384 LIB libspdk_event_vhost_blk.a 00:30:38.384 LIB libspdk_event_sock.a 00:30:38.384 LIB libspdk_event_vmd.a 00:30:38.384 LIB libspdk_event_scheduler.a 00:30:38.384 LIB libspdk_event_iobuf.a 00:30:38.642 CC module/event/subsystems/accel/accel.o 00:30:38.642 LIB libspdk_event_accel.a 00:30:38.901 CC module/event/subsystems/bdev/bdev.o 00:30:39.159 LIB libspdk_event_bdev.a 00:30:39.159 CC module/event/subsystems/scsi/scsi.o 00:30:39.159 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:30:39.159 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:30:39.159 CC module/event/subsystems/nbd/nbd.o 00:30:39.417 LIB libspdk_event_scsi.a 00:30:39.417 LIB libspdk_event_nbd.a 00:30:39.417 LIB libspdk_event_nvmf.a 00:30:39.417 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:30:39.417 CC module/event/subsystems/iscsi/iscsi.o 00:30:39.674 LIB libspdk_event_vhost_scsi.a 00:30:39.674 LIB libspdk_event_iscsi.a 00:30:39.674 CC app/trace_record/trace_record.o 00:30:39.674 CXX app/trace/trace.o 00:30:39.674 CC examples/accel/perf/accel_perf.o 00:30:39.674 CC examples/nvme/hello_world/hello_world.o 00:30:39.932 CC examples/vmd/lsvmd/lsvmd.o 00:30:39.932 CC examples/sock/hello_world/hello_sock.o 00:30:39.932 CC examples/ioat/perf/perf.o 00:30:39.932 
CC test/accel/dif/dif.o 00:30:39.932 CC examples/bdev/hello_world/hello_bdev.o 00:30:39.932 CC examples/blob/hello_world/hello_blob.o 00:30:39.932 LINK lsvmd 00:30:39.932 LINK spdk_trace_record 00:30:39.932 LINK hello_world 00:30:39.932 LINK ioat_perf 00:30:40.190 LINK hello_blob 00:30:40.190 LINK hello_sock 00:30:40.190 LINK hello_bdev 00:30:40.190 LINK accel_perf 00:30:40.190 LINK dif 00:30:40.190 LINK spdk_trace 00:30:44.396 CC examples/vmd/led/led.o 00:30:44.963 LINK led 00:30:49.152 CC examples/ioat/verify/verify.o 00:30:50.089 LINK verify 00:30:51.465 CC app/nvmf_tgt/nvmf_main.o 00:30:52.843 LINK nvmf_tgt 00:30:57.024 CC test/app/bdev_svc/bdev_svc.o 00:30:57.956 LINK bdev_svc 00:31:04.514 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:31:04.514 CC test/app/histogram_perf/histogram_perf.o 00:31:05.082 LINK nvme_fuzz 00:31:05.344 CC test/app/jsoncat/jsoncat.o 00:31:05.344 LINK histogram_perf 00:31:05.912 LINK jsoncat 00:31:05.912 CC examples/nvme/reconnect/reconnect.o 00:31:07.288 LINK reconnect 00:31:10.575 CC test/app/stub/stub.o 00:31:11.508 LINK stub 00:31:21.509 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:31:25.698 CC examples/blob/cli/blobcli.o 00:31:28.229 LINK blobcli 00:31:29.604 LINK iscsi_fuzz 00:31:51.654 CC examples/nvme/nvme_manage/nvme_manage.o 00:31:51.654 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:31:51.912 LINK nvme_manage 00:31:52.479 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:31:55.766 LINK vhost_fuzz 00:31:59.957 CC test/bdev/bdevio/bdevio.o 00:32:02.487 LINK bdevio 00:32:17.389 CC test/blobfs/mkfs/mkfs.o 00:32:18.322 LINK mkfs 00:32:24.884 TEST_HEADER include/spdk/config.h 00:32:24.884 CXX test/cpp_headers/accel_module.o 00:32:25.142 CXX test/cpp_headers/bit_pool.o 00:32:26.518 CXX test/cpp_headers/ioat.o 00:32:27.892 CXX test/cpp_headers/blobfs.o 00:32:29.266 CXX test/cpp_headers/notify.o 00:32:30.640 CXX test/cpp_headers/pipe.o 00:32:32.014 CXX test/cpp_headers/accel.o 00:32:33.390 CXX test/cpp_headers/file.o 00:32:34.765 CXX test/cpp_headers/version.o 00:32:34.765 CXX test/cpp_headers/trace_parser.o 00:32:35.700 CC examples/nvme/arbitration/arbitration.o 00:32:35.959 CXX test/cpp_headers/opal_spec.o 00:32:37.337 CXX test/cpp_headers/uuid.o 00:32:37.337 LINK arbitration 00:32:37.905 CXX test/cpp_headers/likely.o 00:32:38.840 CXX test/cpp_headers/dif.o 00:32:40.217 CXX test/cpp_headers/memory.o 00:32:41.152 CXX test/cpp_headers/vfio_user_pci.o 00:32:42.088 CXX test/cpp_headers/dma.o 00:32:43.026 CXX test/cpp_headers/nbd.o 00:32:43.285 CXX test/cpp_headers/conf.o 00:32:44.221 CXX test/cpp_headers/env_dpdk.o 00:32:45.156 CXX test/cpp_headers/nvmf_spec.o 00:32:45.156 CXX test/cpp_headers/iscsi_spec.o 00:32:45.774 CC examples/bdev/bdevperf/bdevperf.o 00:32:46.341 CXX test/cpp_headers/mmio.o 00:32:46.600 CC examples/nvme/hotplug/hotplug.o 00:32:46.859 CXX test/cpp_headers/json.o 00:32:48.235 CXX test/cpp_headers/opal.o 00:32:48.235 LINK hotplug 00:32:48.493 CC test/dma/test_dma/test_dma.o 00:32:49.058 LINK bdevperf 00:32:49.316 CXX test/cpp_headers/bdev.o 00:32:50.250 LINK test_dma 00:32:50.250 CXX test/cpp_headers/base64.o 00:32:51.624 CXX test/cpp_headers/blobfs_bdev.o 00:32:52.997 CXX test/cpp_headers/nvme_ocssd.o 00:32:54.370 CXX test/cpp_headers/fd.o 00:32:55.307 CXX test/cpp_headers/barrier.o 00:32:56.685 CXX test/cpp_headers/scsi_spec.o 00:32:58.062 CXX test/cpp_headers/zipf.o 00:32:59.438 CXX test/cpp_headers/nvmf.o 00:33:00.816 CXX test/cpp_headers/queue.o 00:33:01.073 CXX test/cpp_headers/xor.o 00:33:02.447 CXX test/cpp_headers/cpuset.o 00:33:03.382 CXX 
test/cpp_headers/thread.o 00:33:04.759 CXX test/cpp_headers/bdev_zone.o 00:33:06.137 CXX test/cpp_headers/fd_group.o 00:33:07.514 CXX test/cpp_headers/tree.o 00:33:07.514 CXX test/cpp_headers/blob_bdev.o 00:33:09.418 CXX test/cpp_headers/crc64.o 00:33:10.379 CXX test/cpp_headers/assert.o 00:33:11.753 CXX test/cpp_headers/nvme_spec.o 00:33:13.125 CC test/env/vtophys/vtophys.o 00:33:13.383 CC test/env/mem_callbacks/mem_callbacks.o 00:33:13.383 CXX test/cpp_headers/endian.o 00:33:14.318 LINK vtophys 00:33:14.885 CXX test/cpp_headers/pci_ids.o 00:33:16.262 CXX test/cpp_headers/log.o 00:33:16.829 LINK mem_callbacks 00:33:17.088 CXX test/cpp_headers/nvme_ocssd_spec.o 00:33:18.465 CXX test/cpp_headers/ftl.o 00:33:19.843 CXX test/cpp_headers/config.o 00:33:19.843 CXX test/cpp_headers/vhost.o 00:33:21.218 CXX test/cpp_headers/bdev_module.o 00:33:21.476 CXX test/cpp_headers/nvme_intel.o 00:33:22.413 CXX test/cpp_headers/idxd_spec.o 00:33:22.670 CC examples/nvme/cmb_copy/cmb_copy.o 00:33:23.604 CXX test/cpp_headers/crc16.o 00:33:23.604 LINK cmb_copy 00:33:24.170 CXX test/cpp_headers/nvme.o 00:33:25.104 CC test/event/event_perf/event_perf.o 00:33:25.104 CXX test/cpp_headers/stdinc.o 00:33:26.038 LINK event_perf 00:33:26.038 CXX test/cpp_headers/scsi.o 00:33:27.414 CXX test/cpp_headers/nvmf_fc_spec.o 00:33:28.350 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:33:28.918 CXX test/cpp_headers/idxd.o 00:33:29.485 LINK env_dpdk_post_init 00:33:30.053 CXX test/cpp_headers/hexlify.o 00:33:30.987 CC app/iscsi_tgt/iscsi_tgt.o 00:33:30.987 CXX test/cpp_headers/reduce.o 00:33:31.921 CXX test/cpp_headers/crc32.o 00:33:31.921 LINK iscsi_tgt 00:33:32.854 CXX test/cpp_headers/init.o 00:33:34.234 CXX test/cpp_headers/nvmf_transport.o 00:33:35.637 CXX test/cpp_headers/nvme_zns.o 00:33:37.014 CXX test/cpp_headers/vfio_user_spec.o 00:33:37.948 CXX test/cpp_headers/util.o 00:33:37.948 CXX test/cpp_headers/jsonrpc.o 00:33:39.324 CXX test/cpp_headers/env.o 00:33:39.324 CC test/event/reactor/reactor.o 00:33:40.259 CXX test/cpp_headers/nvmf_cmd.o 00:33:40.259 LINK reactor 00:33:41.636 CXX test/cpp_headers/lvol.o 00:33:43.014 CXX test/cpp_headers/histogram_data.o 00:33:43.951 CXX test/cpp_headers/event.o 00:33:44.519 CC test/event/reactor_perf/reactor_perf.o 00:33:45.087 CXX test/cpp_headers/trace.o 00:33:45.655 LINK reactor_perf 00:33:46.225 CXX test/cpp_headers/ioat_spec.o 00:33:47.628 CXX test/cpp_headers/string.o 00:33:49.005 CXX test/cpp_headers/ublk.o 00:33:50.385 CXX test/cpp_headers/bit_array.o 00:33:51.765 CXX test/cpp_headers/scheduler.o 00:33:53.144 CXX test/cpp_headers/blob.o 00:33:54.520 CXX test/cpp_headers/gpt_spec.o 00:33:55.457 CXX test/cpp_headers/sock.o 00:33:56.836 CXX test/cpp_headers/vmd.o 00:33:58.211 CXX test/cpp_headers/rpc.o 00:33:58.211 CC examples/nvme/abort/abort.o 00:33:59.180 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:34:00.117 LINK abort 00:34:00.376 LINK pmr_persistence 00:34:00.633 CC app/spdk_tgt/spdk_tgt.o 00:34:01.568 LINK spdk_tgt 00:34:03.471 CC test/env/memory/memory_ut.o 00:34:03.730 CC test/event/app_repeat/app_repeat.o 00:34:04.667 LINK app_repeat 00:34:08.858 LINK memory_ut 00:34:10.762 CC test/event/scheduler/scheduler.o 00:34:11.698 LINK scheduler 00:34:19.808 CC test/env/pci/pci_ut.o 00:34:20.374 LINK pci_ut 00:34:20.632 CC app/spdk_lspci/spdk_lspci.o 00:34:22.008 LINK spdk_lspci 00:34:31.983 CC test/lvol/esnap/esnap.o 00:34:34.512 CC test/nvme/aer/aer.o 00:34:34.771 CC test/nvme/reset/reset.o 00:34:36.675 LINK aer 00:34:36.675 LINK reset 00:34:46.649 CC 
test/nvme/sgl/sgl.o 00:34:47.216 LINK sgl 00:34:50.501 LINK esnap 00:34:52.402 CC test/nvme/e2edp/nvme_dp.o 00:34:53.338 LINK nvme_dp 00:34:55.242 CC test/nvme/overhead/overhead.o 00:34:56.630 CC examples/nvmf/nvmf/nvmf.o 00:34:57.223 LINK overhead 00:34:58.600 LINK nvmf 00:35:16.716 CC test/nvme/err_injection/err_injection.o 00:35:16.716 CC test/nvme/startup/startup.o 00:35:16.716 LINK err_injection 00:35:17.653 LINK startup 00:35:25.767 CC examples/util/zipf/zipf.o 00:35:26.333 LINK zipf 00:35:30.521 CC test/rpc_client/rpc_client_test.o 00:35:31.455 LINK rpc_client_test 00:35:33.354 CC test/thread/poller_perf/poller_perf.o 00:35:34.288 LINK poller_perf 00:35:34.288 CC test/thread/lock/spdk_lock.o 00:35:36.818 CC examples/thread/thread/thread_ex.o 00:35:37.754 LINK thread 00:35:40.288 LINK spdk_lock 00:35:50.264 CC app/spdk_nvme_perf/perf.o 00:35:50.264 CC app/spdk_nvme_identify/identify.o 00:35:51.648 CC test/nvme/reserve/reserve.o 00:35:52.215 LINK spdk_nvme_perf 00:35:52.215 LINK reserve 00:35:52.473 LINK spdk_nvme_identify 00:35:52.732 CC app/spdk_nvme_discover/discovery_aer.o 00:35:52.732 CC test/nvme/simple_copy/simple_copy.o 00:35:53.300 LINK spdk_nvme_discover 00:35:53.558 LINK simple_copy 00:35:55.459 CC app/spdk_top/spdk_top.o 00:36:00.755 LINK spdk_top 00:36:27.297 CC test/nvme/connect_stress/connect_stress.o 00:36:27.297 LINK connect_stress 00:36:27.297 CC test/nvme/boot_partition/boot_partition.o 00:36:27.864 LINK boot_partition 00:36:32.052 CC test/nvme/compliance/nvme_compliance.o 00:36:33.445 LINK nvme_compliance 00:36:33.445 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:36:34.380 CC test/unit/lib/accel/accel.c/accel_ut.o 00:36:34.639 LINK histogram_ut 00:36:37.170 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:36:40.457 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:36:41.417 LINK accel_ut 00:36:41.675 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:36:41.934 LINK blob_bdev_ut 00:36:42.501 LINK tree_ut 00:36:47.795 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:36:51.985 LINK blobfs_async_ut 00:36:52.551 LINK bdev_ut 00:36:52.551 CC app/vhost/vhost.o 00:36:52.810 CC test/unit/lib/blob/blob.c/blob_ut.o 00:36:53.068 LINK vhost 00:36:54.442 CC app/spdk_dd/spdk_dd.o 00:36:55.377 CC app/fio/nvme/fio_plugin.o 00:36:55.377 LINK spdk_dd 00:36:55.636 CC test/nvme/fused_ordering/fused_ordering.o 00:36:56.572 LINK fused_ordering 00:36:57.140 LINK spdk_nvme 00:37:02.443 CC examples/idxd/perf/perf.o 00:37:03.820 LINK idxd_perf 00:37:04.387 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:37:08.576 LINK blobfs_sync_ut 00:37:12.766 LINK blob_ut 00:37:20.881 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:37:21.816 CC test/unit/lib/bdev/part.c/part_ut.o 00:37:21.816 LINK blobfs_bdev_ut 00:37:26.004 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:37:26.263 LINK scsi_nvme_ut 00:37:26.830 CC test/nvme/doorbell_aers/doorbell_aers.o 00:37:27.766 LINK doorbell_aers 00:37:28.024 CC examples/interrupt_tgt/interrupt_tgt.o 00:37:28.592 LINK interrupt_tgt 00:37:29.158 CC test/nvme/fdp/fdp.o 00:37:30.095 LINK fdp 00:37:30.095 LINK part_ut 00:37:30.353 CC test/nvme/cuse/cuse.o 00:37:32.254 CC test/unit/lib/dma/dma.c/dma_ut.o 00:37:33.189 LINK dma_ut 00:37:34.124 LINK cuse 00:37:35.059 CC app/fio/bdev/fio_plugin.o 00:37:37.605 LINK spdk_bdev 00:37:38.172 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:37:39.548 CC test/unit/lib/event/app.c/app_ut.o 00:37:39.807 LINK gpt_ut 00:37:41.182 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:37:41.182 LINK app_ut 
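From here the make output switches from the example apps and functional-test fixtures (the LINK nvme_fuzz, bdevio, mkfs entries above) to the CUnit unit tests: CC/CXX lines compile objects, LIB lines archive static libraries, and each test/unit/lib/.../*_ut.o is linked into a standalone *_ut binary (app_ut, reactor_ut, and so on below). A sketch of how such a binary can be exercised after the build, assuming the standard SPDK tree layout in which each suite is built next to its source; test/unit/unittest.sh is the in-tree wrapper that runs the whole set:

  # Illustrative only: run one CUnit suite directly, or all of them
  # through the in-tree wrapper script.
  cd /home/vagrant/spdk_repo/spdk
  ./test/unit/lib/event/app.c/app_ut   # single suite (the app_ut linked above)
  ./test/unit/unittest.sh              # runs every built *_ut binary in sequence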
00:37:44.466 LINK reactor_ut 00:37:44.725 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:37:46.629 LINK ioat_ut 00:37:49.916 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:37:49.916 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:37:50.850 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:37:52.752 LINK conn_ut 00:37:53.319 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:37:53.578 LINK vbdev_lvol_ut 00:37:53.837 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:37:54.772 LINK init_grp_ut 00:37:54.772 LINK jsonrpc_server_ut 00:37:56.674 LINK json_parse_ut 00:37:56.933 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:37:59.468 CC test/unit/lib/iscsi/param.c/param_ut.o 00:38:01.373 LINK param_ut 00:38:03.901 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:38:04.466 LINK iscsi_ut 00:38:04.723 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:38:05.290 CC test/unit/lib/log/log.c/log_ut.o 00:38:05.548 LINK portal_grp_ut 00:38:06.115 LINK log_ut 00:38:11.382 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:38:11.639 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:38:13.543 LINK tgt_node_ut 00:38:15.449 LINK bdev_ut 00:38:16.820 CC test/unit/lib/notify/notify.c/notify_ut.o 00:38:17.385 LINK lvol_ut 00:38:17.643 LINK notify_ut 00:38:18.208 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:38:18.465 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:38:21.748 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:38:22.682 LINK nvme_ut 00:38:23.250 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:38:26.535 LINK tcp_ut 00:38:26.535 LINK ctrlr_ut 00:38:26.535 LINK subsystem_ut 00:38:26.535 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:38:27.471 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:38:27.729 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:38:28.013 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:38:28.955 LINK bdev_raid_sb_ut 00:38:29.888 LINK ctrlr_discovery_ut 00:38:31.788 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:38:32.354 LINK bdev_raid_ut 00:38:33.726 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:38:34.291 LINK nvme_ctrlr_ut 00:38:34.291 LINK nvme_ctrlr_cmd_ut 00:38:35.226 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:38:35.793 LINK nvme_ctrlr_ocssd_cmd_ut 00:38:35.793 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:38:36.358 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:38:37.293 LINK nvme_ns_ut 00:38:37.293 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:38:38.229 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:38:38.229 LINK nvme_ns_cmd_ut 00:38:38.487 LINK ctrlr_bdev_ut 00:38:38.745 LINK nvme_ns_ocssd_cmd_ut 00:38:39.003 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:38:39.595 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:38:39.595 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:38:39.853 LINK concat_ut 00:38:40.420 LINK nvme_pcie_ut 00:38:40.420 LINK raid1_ut 00:38:40.420 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:38:41.793 LINK nvme_poll_group_ut 00:38:42.725 LINK nvme_qpair_ut 00:38:43.290 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:38:43.291 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:38:44.224 LINK nvme_quirks_ut 00:38:44.224 LINK nvmf_ut 00:38:44.224 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:38:44.482 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:38:44.740 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:38:44.740 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:38:45.306 CC 
test/unit/lib/nvmf/transport.c/transport_ut.o 00:38:45.563 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:38:45.819 LINK nvme_transport_ut 00:38:46.077 LINK raid5f_ut 00:38:46.335 LINK nvme_io_msg_ut 00:38:46.335 LINK nvme_tcp_ut 00:38:46.902 LINK rdma_ut 00:38:46.902 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:38:47.160 LINK transport_ut 00:38:47.418 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:38:47.418 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:38:47.677 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:38:47.935 LINK bdev_zone_ut 00:38:48.871 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:38:48.871 LINK nvme_opal_ut 00:38:49.129 LINK nvme_fabric_ut 00:38:49.387 LINK nvme_pcie_common_ut 00:38:49.387 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:38:50.817 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:38:51.382 LINK nvme_rdma_ut 00:38:51.640 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:38:51.640 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:38:51.899 LINK nvme_cuse_ut 00:38:51.899 LINK vbdev_zone_block_ut 00:38:52.465 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:38:52.465 LINK json_util_ut 00:38:53.031 LINK dev_ut 00:38:53.596 CC test/unit/lib/sock/sock.c/sock_ut.o 00:38:54.163 CC test/unit/lib/thread/thread.c/thread_ut.o 00:38:54.422 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:38:54.422 LINK bdev_nvme_ut 00:38:54.422 LINK sock_ut 00:38:54.680 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:38:54.680 LINK iobuf_ut 00:38:54.939 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:38:55.196 LINK lun_ut 00:38:55.196 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:38:55.196 LINK scsi_ut 00:38:55.762 LINK thread_ut 00:38:55.762 CC test/unit/lib/sock/posix.c/posix_ut.o 00:38:55.762 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:38:56.697 LINK scsi_pr_ut 00:38:56.956 LINK scsi_bdev_ut 00:38:57.214 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:38:57.473 LINK posix_ut 00:38:58.407 CC test/unit/lib/util/base64.c/base64_ut.o 00:38:58.974 LINK json_write_ut 00:38:58.974 LINK base64_ut 00:38:59.232 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:38:59.798 LINK bit_array_ut 00:39:00.367 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:39:00.644 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:39:00.644 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:39:00.644 LINK cpuset_ut 00:39:00.644 LINK crc16_ut 00:39:00.644 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:39:00.903 LINK pci_event_ut 00:39:00.903 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:39:00.903 LINK crc32_ieee_ut 00:39:00.903 LINK subsystem_ut 00:39:01.161 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:39:01.161 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:39:01.161 LINK crc64_ut 00:39:01.161 LINK crc32c_ut 00:39:01.419 CC test/unit/lib/util/iov.c/iov_ut.o 00:39:01.419 CC test/unit/lib/util/dif.c/dif_ut.o 00:39:01.419 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:39:01.419 LINK iov_ut 00:39:01.419 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:39:01.678 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:39:01.678 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:39:01.678 CC test/unit/lib/util/math.c/math_ut.o 00:39:01.678 LINK rpc_ut 00:39:01.678 LINK math_ut 00:39:01.678 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:39:01.937 LINK idxd_user_ut 00:39:01.937 LINK idxd_ut 00:39:02.195 LINK pipe_ut 00:39:02.195 LINK dif_ut 00:39:02.454 CC test/unit/lib/rdma/common.c/common_ut.o 00:39:02.713 LINK common_ut 00:39:02.713 CC 
test/unit/lib/util/string.c/string_ut.o 00:39:02.713 LINK vhost_ut 00:39:02.971 CC test/unit/lib/util/xor.c/xor_ut.o 00:39:03.230 LINK string_ut 00:39:03.230 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:39:03.230 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:39:03.230 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:39:03.489 LINK xor_ut 00:39:03.489 LINK ftl_l2p_ut 00:39:03.489 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:39:04.057 LINK ftl_bitmap_ut 00:39:04.057 LINK ftl_band_ut 00:39:04.316 LINK ftl_io_ut 00:39:04.316 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:39:04.575 LINK ftl_mempool_ut 00:39:04.575 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:39:04.833 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:39:04.833 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:39:05.091 LINK ftl_mngt_ut 00:39:05.659 LINK ftl_sb_ut 00:39:05.659 LINK ftl_layout_upgrade_ut 00:40:13.340 21:36:26 -- spdk/autopackage.sh@44 -- $ make -j10 clean 00:40:13.340 make[1]: Nothing to be done for 'clean'. 00:40:13.340 21:36:31 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 00:40:13.340 21:36:31 -- common/autotest_common.sh@718 -- $ xtrace_disable 00:40:13.340 21:36:31 -- common/autotest_common.sh@10 -- $ set +x 00:40:13.340 21:36:31 -- spdk/autopackage.sh@48 -- $ timing_finish 00:40:13.340 21:36:31 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:40:13.340 21:36:31 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:40:13.340 21:36:31 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:40:13.340 + [[ -n 2597 ]] 00:40:13.340 + sudo kill 2597 00:40:13.340 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:40:13.350 [Pipeline] } 00:40:13.371 [Pipeline] // timeout 00:40:13.377 [Pipeline] } 00:40:13.395 [Pipeline] // stage 00:40:13.400 [Pipeline] } 00:40:13.418 [Pipeline] // catchError 00:40:13.427 [Pipeline] stage 00:40:13.430 [Pipeline] { (Stop VM) 00:40:13.444 [Pipeline] sh 00:40:13.723 + vagrant halt 00:40:17.012 ==> default: Halting domain... 00:40:25.148 [Pipeline] sh 00:40:25.424 + vagrant destroy -f 00:40:28.704 ==> default: Removing domain... 00:40:30.090 [Pipeline] sh 00:40:30.369 + mv output /var/jenkins/workspace/ubuntu20-vg-autotest/output 00:40:30.378 [Pipeline] } 00:40:30.395 [Pipeline] // stage 00:40:30.401 [Pipeline] } 00:40:30.417 [Pipeline] // dir 00:40:30.423 [Pipeline] } 00:40:30.440 [Pipeline] // wrap 00:40:30.446 [Pipeline] } 00:40:30.460 [Pipeline] // catchError 00:40:30.468 [Pipeline] stage 00:40:30.470 [Pipeline] { (Epilogue) 00:40:30.485 [Pipeline] sh 00:40:30.764 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:40:48.891 [Pipeline] catchError 00:40:48.893 [Pipeline] { 00:40:48.908 [Pipeline] sh 00:40:49.187 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:40:49.187 Artifacts sizes are good 00:40:49.196 [Pipeline] } 00:40:49.214 [Pipeline] // catchError 00:40:49.227 [Pipeline] archiveArtifacts 00:40:49.234 Archiving artifacts 00:40:49.595 [Pipeline] cleanWs 00:40:49.607 [WS-CLEANUP] Deleting project workspace... 00:40:49.607 [WS-CLEANUP] Deferred wipeout is used... 00:40:49.613 [WS-CLEANUP] done 00:40:49.615 [Pipeline] } 00:40:49.632 [Pipeline] // stage 00:40:49.637 [Pipeline] } 00:40:49.653 [Pipeline] // node 00:40:49.659 [Pipeline] End of Pipeline 00:40:49.694 Finished: SUCCESS
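The epilogue stage above stops the Vagrant build VM, deletes it, and moves the output directory into the Jenkins workspace so the artifacts can be compressed, size-checked, and archived. A condensed sketch of that teardown, assuming the same Vagrant-managed worker; each command appears verbatim in the log above:

  # Illustrative summary of the pipeline epilogue logged above.
  vagrant halt          # "Halting domain..." - power off the build VM
  vagrant destroy -f    # "Removing domain..." - delete the VM
  mv output /var/jenkins/workspace/ubuntu20-vg-autotest/output   # hand results to Jenkins for archiving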